Anshuman is a deeply analytical machine learning engineer who brilliantly connects complex AI concepts with everyday experiences. His tweets reveal a brain wired to decode intricate technical details while making them accessible and relatable. He's a natural explainer and problem solver who thrives on clarity and insight.
Anshuman’s tweets are so heavily laden with Transformer math, even his love life seems to be stuck in multi-head attention—he’s got all the right heads, just waiting for the algorithm to optimize dating outcomes!
His humorous, insightful thread on the transformer attention mechanism went viral, garnering over 700K views and 12K likes and establishing him as a go-to voice for clear ML explanations on social media.
His goal: to demystify machine learning and AI by translating technical jargon into engaging narratives that educate, inform, and inspire peers and enthusiasts alike.
Anshuman values precision, intellectual rigor, and clarity of thought. He believes that understanding complex systems requires breaking down layered information into digestible parts and that knowledge sharing drives progress. He embraces the power of data-driven insights and thoughtful curiosity.
His strengths lie in his exceptional ability to analyze, simplify, and communicate complex AI concepts, turning abstract ideas into relatable stories that resonate widely. His main weakness: his focus on detailed technical explanations can sometimes overwhelm audiences unfamiliar with the jargon, limiting broader engagement.
To grow his audience on X, Anshuman should blend his deep technical content with more accessible threads and interactive Q&A sessions, leveraging storytelling and relatable analogies to invite engagement from both experts and curious beginners.
Fun fact: Anshuman masterfully equates romantic relationship dynamics with transformer attention mechanisms, showing his unique ability to blend technical expertise with humor and emotional insight.
Profile: anshuman (@athleticKoder). Bio: "ml @zomato; prev: ai consultant @google". 18,668 followers, 879 following, 9,759 posts. Newsletter: fullstackagents.substack.com

His viral attention thread (712K views, 12.2K likes, 5K bookmarks), in full:

"She dumped me last night.

Not because I don't listen.
Not because I'm always on my phone.
Not even because I forgot our anniversary (twice).

But because, in her exact words: 'You only pay attention to the parts of what I say that you think are important.'

I stared at her for a moment and realized... she had just perfectly described the attention mechanism in transformers. Turns out I wasn't being a bad boyfriend. I was being mathematically optimal.

See, in conversations (and transformers), you don't give equal weight to every word. Some words matter more for understanding context. Attention figures out exactly HOW important each word should be.

Here's the beautiful math:

Attention(Q, K, V) = softmax(QK^T / √d_k) V

Breaking it down:
Q (Query): 'What am I looking for?'
K (Key): 'What info is available?'
V (Value): 'What is that info?'
d_k: key dimension (for scaling)

Think library analogy: you have a question (Query), books have titles (Keys) and content (Values), and attention finds which books are most relevant.

Step by step with 'The cat sat on the mat':

Step 1: Create Q, K, V. Each word becomes three vectors via learned matrices W_Q, W_K, W_V. For 'cat': Query: 'What should I attend to when processing cat?' Key: 'I am cat.' Value: 'Here's cat info.'

Step 2: Calculate scores. QK^T gives how much each word should attend to the others. Processing 'sat'? High similarity with 'cat' (cats sit) and 'mat' (where sitting happens).

Step 3: Scale by √d_k. This prevents the dot products from getting too large and keeps the softmax balanced.

Step 4: Softmax. Converts scores into probabilities that sum to 1: 'cat' 0.4 (subject), 'sat' 0.3 (action), 'mat' 0.2 (location), 'on' 0.05 (preposition), 'the' 0.05 (article).

Step 5: Weight the values. Multiply each word's value vector by its attention weight and sum. Now 'sat' knows it's most related to 'cat' and 'mat'.

Multi-head magic: transformers do this multiple times in parallel.
Head 1: subject-verb relationships
Head 2: spatial ('on', 'in', 'under')
Head 3: temporal ('before', 'after')
Head 4: semantic similarity
Each head learns a different type of relationship.

Why this changed everything. Before, RNNs read with a flashlight: one word at a time, forgetting the beginning. After, attention is floodlights on the entire sentence with dimmer switches. This is why ChatGPT can remember 50 messages back, know what 'it' refers to, and tell 'bank' (money) from 'bank' (river) based on context.

The kicker: models learn these patterns from data alone. Nobody programmed grammar rules; the model figured out language structure just by predicting next words. Attention is how AI learned to read between the lines.

Just like my therapist helped me understand my focus patterns, maybe understanding transformers helps us see how we decide what matters. Now if only I could implement multi-head attention in dating... 🤖 Still waiting for 'scaled dot-product listening' to be invented."
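The walkthrough in that thread maps almost line for line onto code. A minimal sketch of single-head scaled dot-product attention in NumPy; the toy embeddings and the W_Q, W_K, W_V matrices are random stand-ins for learned weights, not values from any real model:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # how much each word attends to the others
    weights = softmax(scores, axis=-1)   # each row is a probability distribution
    return weights @ V, weights

rng = np.random.default_rng(0)
n_words, d_model, d_k = 6, 16, 8         # "The cat sat on the mat" -> 6 tokens
X = rng.normal(size=(n_words, d_model))  # toy embeddings (random stand-ins)
W_Q = rng.normal(size=(d_model, d_k))    # learned in a real model; random here
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_k))

out, weights = attention(X @ W_Q, X @ W_K, X @ W_V)
print(out.shape)             # (6, 8): one context-mixed vector per word
print(weights.sum(axis=-1))  # every row sums to 1.0, as in Step 4
```

Multi-head attention is this same function run in parallel with separate W_Q, W_K, W_V per head, with the per-head outputs concatenated.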
Other high-performing tweets:

- "this paper changed my life" (photo; 555K views, 8.2K likes, 7K bookmarks)
- "You're in a ML Engineer interview at Perplexity, and the interviewer asks: 'Your RAG system is hallucinating in production. How do you diagnose what's broken - the retriever or the generator?' Here's how you can answer:" (352K views, 4.6K bookmarks)
- "You're in an ML inference engineer interview at Anthropic, and the interviewer asks: 'Can you explain speculative decoding and why we'd want to use it?' Here's how you can answer:" (319K views, 4.2K bookmarks)
- "'Just use OpenAI API.' Until you need: custom fine-tuned models, <50ms p99 latency, $0.001/1K tokens (not $1.25/1K input). Then you build your own inference platform. Here's how to do that:" (364K views, 4K bookmarks)
- A verbatim repost of the attention thread above (193K views)

His second long-form hit (281K views, 1.8K likes, 2.5K bookmarks), in full:

"I rejected a job offer yesterday.

Not because of the salary.
Not because of the tech stack.
Not even because of the long hours they warned me about.

But because, when I asked how they evaluate their AI systems, the hiring manager said: 'We just ask it some questions and see if the answers sound right.'

I stared at them for a moment and realized... they just described the biggest problem in AI today. See, 'sounds right' isn't a measurement. It's a hope.

Here's what proper LLM evaluation actually looks like:

- Accuracy: can it get factual questions right? (Not 80% of the time. Consistently.)
- Hallucination rate: how often does it make things up? (This should be near zero for critical applications.)
- Bias metrics: does it treat all groups fairly? (Measured across demographics, not assumed.)

Real evaluation frameworks: BLEU scores for translation quality, perplexity for language modeling, human evaluation with inter-annotator agreement, adversarial testing (red teaming), and domain-specific benchmarks (legal, medical, financial).

The process:
> Define success criteria BEFORE deployment
> Create diverse test sets (not just happy paths)
> Measure consistently across model versions
> Track performance over time (models drift)
> Have humans validate edge cases

Why this matters. Before proper evals: 'Our model is amazing!' (based on cherry-picked examples). After proper evals: 'Our AI achieves 94.2% accuracy on domain X, with known failure modes Y and Z.' The difference? One builds trust. The other destroys it when reality hits.

The kicker: most companies are still in the 'sounds right' phase. They're deploying models evaluated by vibes, not metrics. Just like you wouldn't join a team that deploys code without tests, you shouldn't join one that deploys AI without proper evaluation.

What's your experience with LLM evaluation? Are we measuring what actually matters?"
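The "metrics, not vibes" point can be made concrete with a tiny offline eval harness. Everything below (the three-question test set, the canned fake_model stub, exact-match scoring) is an illustrative assumption; real pipelines use far larger test sets and graded judges rather than exact match:

```python
# Minimal offline eval sketch: score a model against a fixed test set instead
# of eyeballing answers. All names and data are hypothetical stand-ins.
TEST_SET = [
    {"q": "capital of France?", "expected": "paris"},
    {"q": "2 + 2?", "expected": "4"},
    {"q": "capital of Australia?", "expected": "canberra"},
]

def fake_model(question: str) -> str:
    # Hypothetical model stub; swap in a real inference call in practice.
    canned = {"capital of France?": "Paris", "2 + 2?": "4"}
    # A confident wrong default stands in for a hallucination.
    return canned.get(question, "Sydney")

def evaluate(model, test_set):
    # Exact-match accuracy: a number you can track across model versions.
    correct = sum(
        model(case["q"]).strip().lower() == case["expected"]
        for case in test_set
    )
    return correct / len(test_set)

accuracy = evaluate(fake_model, TEST_SET)
print(f"accuracy: {accuracy:.1%}")
```

The same loop extends to hallucination rate or bias metrics by swapping the per-case scoring rule; the point is that the criteria are defined before deployment and rerun on every model version.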
More notable tweets:

- Career update: "joined zomato as Machine Learning Engineer 2" (162K views, 1.5K likes)
- "Techniques I'd master if I wanted to make LLMs faster + cheaper. 1. Quantization 2. KV-Cache Quantization 3. Flash Attention 4. Speculative Decoding 5. LoRA 6. Pruning 7. Knowledge Distillation 8. Weight Sharing 9. Sparse Attention 10. Batching & Dynamic Batching 11. Model Serving Optimization 12. Tensor Parallelism 13. Pipeline Parallelism 14. Paged Attention 15. Mixed Precision Inference 16. Early Exit / Token-Level Pruning" (58K views, 1.4K bookmarks)
- "You're in a AI Engineer interview at Microsoft, and the interviewer asks: 'Our team needs to build RAG over 10M documents. Which vector database and why?' Here's how you answer:" (147K views, 2K bookmarks)
- "software Engineers have a runway of 5 years left" (83K views, 1.2K likes)
- "You're in a ML Engineer interview at Anthropic, and the interviewer asks: 'Your LLM inference is running out of GPU memory with long conversations. How do you fix this?' Here's how you answer:" (115K views, 1.6K bookmarks)
- "ML concepts every data scientist should know for interviews. Bookmark this. 1. Bias-Variance Tradeoff 2. Cross-Validation Strategies 3. Regularization (L1, L2, Elastic Net) 4. Class Imbalance & Sampling Techniques 5. Feature Engineering & Selection 6. Overfitting vs Underfitting 7. Evaluation Metrics (beyond accuracy) 8. Hyperparameter Tuning 9. Train-Test Data Leakage 10. Ensemble Methods 11. Dimensionality Reduction 12. Model Interpretability (SHAP, LIME) 13. Gradient Descent Variants 14. Activation Functions & Neural Networks 15. Imbalanced Dataset Handling 16. Production Model Monitoring" (52K views, 2K bookmarks)
- "'Just use Vector Database.' Until you need: 100M+ vectors indexed, <10ms p95 search latency, $50/month (not $500/month). Then you build your own vector database. Here's what that actually means:" (144K views, 1.3K bookmarks)
- "'Just rent a GPU for training.' Until you need: multi-node training for 70B+ models, $5/hour per GPU (not $30/hour), 90%+ GPU utilization. Then you build your own ml infra. Here's the reality:" (146K views, 1K bookmarks)
l_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,26],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/tsmHx7q0Q2","expanded_url":"https://x.com/anshuizme/status/1869368439636422796/photo/1","id_str":"1869357150931415040","indices":[27,50],"media_key":"3_1869357150931415040","media_url_https":"https://pbs.twimg.com/media/GfFKbKuacAAjtxf.jpg","type":"photo","url":"https://t.co/tsmHx7q0Q2","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":772,"y":316,"h":618,"w":618}]},"medium":{"faces":[{"x":452,"y":185,"h":362,"w":362}]},"small":{"faces":[{"x":256,"y":104,"h":205,"w":205}]},"orig":{"faces":[{"x":772,"y":316,"h":618,"w":618}]}},"sizes":{"large":{"h":1536,"w":2048,"resize":"fit"},"medium":{"h":900,"w":1200,"resize":"fit"},"small":{"h":510,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1536,"wi
dth":2048,"focus_rects":[{"x":0,"y":0,"w":2048,"h":1147},{"x":307,"y":0,"w":1536,"h":1536},{"x":402,"y":0,"w":1347,"h":1536},{"x":691,"y":0,"w":768,"h":1536},{"x":0,"y":0,"w":2048,"h":1536}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1869357150931415040"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/tsmHx7q0Q2","expanded_url":"https://x.com/anshuizme/status/1869368439636422796/photo/1","id_str":"1869357150931415040","indices":[27,50],"media_key":"3_1869357150931415040","media_url_https":"https://pbs.twimg.com/media/GfFKbKuacAAjtxf.jpg","type":"photo","url":"https://t.co/tsmHx7q0Q2","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":772,"y":316,"h":618,"w":618}]},"medium":{"faces":[{"x":452,"y":185,"h":362,"w":362}]},"small":{"faces":[{"x":256,"y":104,"h":205,"w":205}]},"orig":{"faces":[{"x":772,"y":316,"h":618,"w":618}]}},"sizes":{"large":{"h":1536,"w":2048,"resize":"fit"},"medium":{"h":900,"w":1200,"resize":"fit"},"small":{"h":510,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1536,"width":2048,"focus_rects":[{"x":0,"y":0,"w":2048,"h":1147},{"x":307,"y":0,"w":1536,"h":1536},{"x":402,"y":0,"w":1347,"h":1536},{"x":691,"y":0,"w":768,"h":1536},{"x":0,"y":0,"w":2048,"h":1536}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1869357150931415040"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1869368439636422796","view_count":555184,"bookmark_count":6975,"created_at":1734527141000,"favorite_count":8190,"quote_count":63,"reply_count":120,"retweet_count":540,"user_id_str":"1229293267625234432","conversation_id_str":"1869368439636422796","full_text":"this paper changed my life 
https://t.co/tsmHx7q0Q2","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,59],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/cyexixding","expanded_url":"https://x.com/athleticKoder/status/1976161545382932932/photo/1","id_str":"1976161523715203072","indices":[60,83],"media_key":"3_1976161523715203072","media_url_https":"https://pbs.twimg.com/media/G2y8b4fW4AACT0j.png","type":"photo","url":"https://t.co/cyexixding","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":394,"w":700,"resize":"fit"},"medium":{"h":394,"w":700,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":394,"width":700,"focus_rects":[{"x":0,"y":0,"w":700,"h":392},{"x":306,"y":0,"w":394,"h":394},{"x":354,"y":0,"w":346,"h":394},{"x":503,"y":0,"w":197,"h":394},{"x":0,"y":0,"w":700,"h":394}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1976161523715203072"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/cyexixding","expanded_url":"https://x.com/athleticKoder/status/1976161545382932932/photo/1","id_str":"1976161523715203072","indices":[60,83],"media_key":"3_1976161523715203072","media_url_https":"https://pbs.twimg.com/media/G2y8b4fW4AACT0j.png","type":"photo","url":"https://t.co/cyexixding","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":394,"w":700,"resize":"fit"},"medium":{"h":394,"w":700,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":394,"width":700,"focus_rects":[{"x":0,"y":0,"w":700,"h"
:392},{"x":306,"y":0,"w":394,"h":394},{"x":354,"y":0,"w":346,"h":394},{"x":503,"y":0,"w":197,"h":394},{"x":0,"y":0,"w":700,"h":394}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1976161523715203072"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1976161545382932932","view_count":162621,"bookmark_count":105,"created_at":1759988602000,"favorite_count":1483,"quote_count":4,"reply_count":114,"retweet_count":11,"user_id_str":"1229293267625234432","conversation_id_str":"1976161545382932932","full_text":"career update: joined zomato as Machine Learning Engineer 2 https://t.co/cyexixding","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1969745303457935654","view_count":193777,"bookmark_count":1465,"created_at":1758458851000,"favorite_count":2701,"quote_count":74,"reply_count":104,"retweet_count":262,"user_id_str":"1229293267625234432","conversation_id_str":"1969745303457935654","full_text":"She dumped me last night.\n\nNot because I don't listen. \nNot because I'm always on my phone.\nNot even because I forgot our anniversary (twice).\n\nBut because, \n\nin her exact words: \n\n\"You only pay attention to the parts of what I say that you think are important.\"\n\nI stared at her for a moment and realized...\n\nShe just perfectly described the attention mechanism in transformers.\n\nTurns out I wasn't being a bad boyfriend. I was being mathematically optimal.\n\nSee, in conversations (and transformers), you don't give equal weight to every word. Some words matter more for understanding context. 
Attention figures out exactly HOW important each word should be.\n\nHere's the beautiful math:\n\nAttention(Q, K, V) = softmax(QK^T / √d_k)V\n\nBreaking it down:\n\nQ (Query): \"What am I looking for?\"\nK (Key): \"What info is available?\"\nV (Value): \"What is that info?\"\nd_k: Key dimension (for scaling)\n\nThink library analogy: \n\nYou have a question (Query). Books have titles (Keys) and content (Values). Attention finds which books are most relevant.\n\nStep-by-step with \"The cat sat on the mat\":\n\nStep 1: Create Q, K, V\nEach word → three vectors via learned matrices W_Q, W_K, W_V\nFor \"cat\":\n\nQuery: \"What should I attend to when processing 'cat'?\"\nKey: \"I am 'cat'\"\nValue: \"Here's cat info\"\n\nStep 2: Calculate scores\nQK^T = how much each word should attend to others\n\nProcessing \"sat\"? High similarity with \"cat\" (cats sit) and \"mat\" (where sitting happens).\n\nStep 3: Scale by √d_k\nPrevents dot products from getting too large, keeps softmax balanced.\n\nStep 4: Softmax\nConverts scores to probabilities:\n\n\"cat\": 0.4 (subject)\n\"sat\": 0.3 (action)\n\"mat\": 0.2 (location)\n\"on\": 0.05 (preposition)\n\"the\": 0.05 (article)\n\nStep 5: Weight values\nMultiply each word's value by attention weight, sum up. Now \"sat\" knows it's most related to \"cat\" and \"mat\".\n\nMulti-Head Magic:\nTransformers do this multiple times in parallel:\n\nHead 1: Subject-verb relationships\nHead 2: Spatial (\"on\", \"in\", \"under\")\nHead 3: Temporal (\"before\", \"after\")\nHead 4: Semantic similarity\n\nEach head learns different relationship types.\n\nWhy This Changed Everything:\n\nBefore: RNNs = reading with flashlight (one word at a time, forget the beginning)\n\nAfter: Attention = floodlights on entire sentence with dimmer switches\nThis is why ChatGPT can:\n\nRemember 50 messages ago\n\nKnow \"it\" refers to something specific\nUnderstand \"bank\" = money vs river based on context\nThe Kicker:\nModels learn these patterns from data alone. 
Nobody programmed grammar rules. It figured out language structure just by predicting next words.\nAttention is how AI learned to read between the lines.\n\nJust like my therapist helped me understand my focus patterns, maybe understanding transformers helps us see how we decide what matters.\n\nNow if only I could implement multi-head attention in dating... 🤖\n\nStill waiting for \"scaled dot-product listening\" to be invented.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"lang":"en","retweeted":false,"fact_check":null,"id":"1966849848688205843","view_count":281928,"bookmark_count":2512,"created_at":1757768520000,"favorite_count":1809,"quote_count":18,"reply_count":101,"retweet_count":147,"user_id_str":"1229293267625234432","conversation_id_str":"1966849848688205843","full_text":"I rejected a job offer yesterday. \n\nNot because of the salary. \nNot because of the tech stack. \nNot even because of the long hours they warned me about. \n\nBut because, when I asked how they evaluate their AI systems, the hiring manager said: \n\n\"We just ask it some questions and see if the answers sound right.\"\n\nI stared at them for a moment and realized... They just described the biggest problem in AI today.\n\nSee, \"sounds right\" isn't a measurement. \n\nIt's a hope.\n\nHere's what proper LLM evaluation actually looks like:\n\n- Accuracy: Can it get factual questions right? (Not 80% of the time. Consistently.)\n\n- Hallucination rate: How often does it make things up? (This should be near zero for critical applications.)\n\n- Bias metrics: Does it treat all groups fairly? 
(Measured across demographics, not assumed.)\n\nReal Evaluation Frameworks:\n\n- BLEU scores for translation quality\n- Perplexity for language modeling\n- Human evaluation with inter-annotator agreement\n- Adversarial testing (red teaming)\n- Domain-specific benchmarks (legal, medical, financial)\n\nThe Process:\n\n> Define success criteria BEFORE deployment\n> Create diverse test sets (not just happy paths)\n> Measure consistently across model versions\n> Track performance over time (models drift)\n> Have humans validate edge cases\n\nWhy This Matters: \nBefore proper evals: \"Our model is amazing!\" \n(based on cherry-picked examples) \nAfter proper evals: \"Our AI achieves 94.2% accuracy on domain X, with known failure modes Y and Z\"\n\nThe difference? One builds trust. The other destroys it when reality hits.\n\nThe kicker: Most companies are still in the \"sounds right\" phase. They're deploying models evaluated by vibes, not metrics.\n\nJust like you wouldn't join a team that deploys code without tests, you shouldn't join one that deploys AI without proper evaluation.\n\nWhat's your experience with LLM evaluation? 
\nAre we measuring what actually matters?","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,48],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1891814775274995837","view_count":83237,"bookmark_count":158,"created_at":1739878765000,"favorite_count":1178,"quote_count":14,"reply_count":76,"retweet_count":46,"user_id_str":"1229293267625234432","conversation_id_str":"1891814775274995837","full_text":"software Engineers have a runway of 5 years left","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do 
that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,223],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1968658985285734523","view_count":352479,"bookmark_count":4614,"created_at":1758199852000,"favorite_count":2978,"quote_count":8,"reply_count":49,"retweet_count":226,"user_id_str":"1229293267625234432","conversation_id_str":"1968658985285734523","full_text":"You're in an ML Engineer interview at Perplexity, and the interviewer asks: \n\n\"Your RAG system is hallucinating in production. How do you diagnose what's broken - the retriever or the generator?\" \n\nHere's how you can answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,179],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1972649148257312894","view_count":147739,"bookmark_count":2044,"created_at":1759151181000,"favorite_count":1267,"quote_count":4,"reply_count":39,"retweet_count":113,"user_id_str":"1229293267625234432","conversation_id_str":"1972649148257312894","full_text":"You’re in an AI Engineer interview at Microsoft, and the interviewer asks:\n\n‘Our team needs to build RAG over 10M documents. 
Which vector database and why?’\n\nHere’s how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the 
reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,181],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1966093753602732192","view_count":319662,"bookmark_count":4184,"created_at":1757588253000,"favorite_count":2898,"quote_count":7,"reply_count":30,"retweet_count":168,"user_id_str":"1229293267625234432","conversation_id_str":"1966093753602732192","full_text":"You're in an ML inference engineer interview at Anthropic, and the interviewer asks:\n\n\"Can you explain speculative decoding and why we'd want to use it?\"\n\nHere's how you can answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1971551880552829265","view_count":37374,"bookmark_count":443,"created_at":1758889572000,"favorite_count":513,"quote_count":4,"reply_count":24,"retweet_count":41,"user_id_str":"1229293267625234432","conversation_id_str":"1971551880552829265","full_text":"A girl at my gym approached me after her workout, clearly annoyed. \n\n\"I've been watching and copying your entire routine for weeks, but I'm not seeing the same improvements you are!\" \n\nI explained, \"You can't just mimic what I do - you need to understand which exercises deserve more focus for your specific goals.\" \n\nShe nodded.\n\nAnd then she said, \"Wait, isn't that like attention mechanism in ChatGPT? \" \n\nAnd I know you're sitting there like: WTF is Attention Mechanism?\n\nAttention Mechanism is like that gym bro who knows exactly which exercises deserve maximum effort during each workout. 
\n\nHow does it work in LLMs?\n\nYou feed a sentence with multiple words to the model\nEach word \"examines\" ALL other words in the sentence\nIt calculates \"how much attention should I pay to each word?\"\n\nCreates weighted connections based on relevance\nImportant words get higher attention scores, others get ignored\n\nThe Complete Math:\n\nStep 1: Create Query, Key, and Value matrices\n\nQuery (Q) = What am I looking for?\nKey (K) = What information is available?\nValue (V) = The actual content to extract\n\nFor each word position i:\n\nQ_i = X_i × W_Q (input × query weight matrix)\nK_i = X_i × W_K (input × key weight matrix)\nV_i = X_i × W_V (input × value weight matrix)\n\nStep 2: Calculate Attention Scores\nScore(i,j) = Q_i × K_j^T\nThis tells us how much word i should pay attention to word j.\n\nStep 3: Scale the scores\nScaled_Score = Score / √d_k\nWhere d_k is the dimension of the key vectors (prevents exploding gradients).\n\nStep 4: Apply Softmax\nAttention_Weight(i,j) = Softmax(Scaled_Score(i,j))\nSoftmax formula: e^(x_i) / Σ(e^(x_k)) for all k\nThis ensures all attention weights sum to 1.\n\nStep 5: Weighted Sum\nOutput_i = Σ(Attention_Weight(i,j) × V_j) for all j\nComplete Formula: Attention(Q,K,V) = Softmax(QK^T / √d_k)V\n\nSentence: \"She wants to deadlift heavy weights\"\nLet's say we have 3-dimensional embeddings (simplified):\nWord Embeddings:\nShe = [1, 0, 0]\nwants = [0, 1, 0]\ndeadlift = [1, 1, 1]\nheavy = [0, 0, 1]\nweights = [1, 0, 1]\nWhen processing \"deadlift\":\nQuery for \"deadlift\" = [1, 1, 1]\n\nCalculate dot products (attention scores):\n\ndeadlift → She: [1,1,1] · [1,0,0] = 1\ndeadlift → wants: [1,1,1] · [0,1,0] = 1\ndeadlift → deadlift: [1,1,1] · [1,1,1] = 3\ndeadlift → heavy: [1,1,1] · [0,0,1] = 1\ndeadlift → weights: [1,1,1] · [1,0,1] = 2\n\nRaw scores: [1, 1, 3, 1, 2]\n\nAfter Softmax:\n\nShe: e^1/(e^1+e^1+e^3+e^1+e^2) = 0.08\nwants: 0.08\ndeadlift: e^3/(total) = 0.56\nheavy: 0.08\nweights: e^2/(total) = 0.21\n\nFinal attention 
weights: [0.08, 0.08, 0.56, 0.08, 0.21]\n\nMulti-Head Attention (the gym analogy):\n\nThink of it like having multiple personal trainers, each focusing on different aspects:\n\nHead 1: Focuses on exercise form and technique\nHead 2: Focuses on muscle groups being targeted\nHead 3: Focuses on safety and proper progression\n\nEach head has its own Q, K, V matrices and calculates attention independently, then results are concatenated.\n\nMathematical representation: \n\nMultiHead(Q,K,V) = Concat(head_1, head_2, ..., head_h) × W_O\n\nWhere each head_i = Attention(QW_i^Q, KW_i^K, VW_i^V)\n\nWhy this revolutionized NLP:\n\n> Context Understanding – Mathematical precision in determining word relationships\n\n> Parallel Processing – All attention scores calculated simultaneously, not sequentially\n\n> Gradient Flow – Softmax ensures smooth gradients for training\n\n> Scalability – Works efficiently with sequences of any length\n\nFinal Result: Attention Mechanism gave AI mathematical precision in focusing on what matters - just like how you calculate exactly which muscle groups need the most work based on your goals!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,196],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1969025658052395058","view_count":115506,"bookmark_count":1624,"created_at":1758287274000,"favorite_count":1145,"quote_count":3,"reply_count":22,"retweet_count":92,"user_id_str":"1229293267625234432","conversation_id_str":"1969025658052395058","full_text":"You're in an ML Engineer interview at Anthropic, and the interviewer asks: \n\n\"Your LLM inference is running out of GPU memory with long conversations. 
How do you fix this?\" \n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/uPnsTVY6O9","expanded_url":"https://x.com/anshuizme/status/1868333926676373540/photo/1","id_str":"1868333915133624320","indices":[275,298],"media_key":"3_1868333915133624320","media_url_https":"https://pbs.twimg.com/media/Ge2nzAVawAAzYpJ.jpg","type":"photo","url":"https://t.co/uPnsTVY6O9","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":593,"y":775,"h":120,"w":120},{"x":756,"y":1123,"h":117,"w":117}]},"medium":{"faces":[{"x":445,"y":581,"h":90,"w":90},{"x":567,"y":842,"h":87,"w":87}]},"small":{"faces":[{"x":251,"y":329,"h":50,"w":50},{"x":321,"y":477,"h":49,"w":49}]},"orig":{"faces":[{"x":593,"y":775,"h":120,"w":120},{"x":756,"y":1123,"h":117,"w":117}]}},"sizes":{"large":{"h":1600,"w":1066,"resize":"fit"},"medium":{"h":1200,"w":800,"resize":"fit"},"small":{"h":680,"w":453,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1600,"width":1066,"focus_rects":[{"x":0,"y":622,"w":1066,"h":597},{"x":0,"y":387,"w":1066,"h":1066},{"x":0,"y":313,"w":1066,"h":1215},{"x":266,"y":0,"w":800,"h":1600},{"x":0,"y":0,"w":1066,"h":1600}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1868333915133624320"}}},{"display_url":"pic.x.com/uPnsTVY6O9","expanded_url":"https://x.com/anshuizme/status/1868333926676373540/photo/1","id_str":"1868333915112693760","indices":[275,298],"media_key":"3_1868333915112693760","media_url_https":"https://pbs.twimg.com/media/Ge2nzAQbYAA318R.jpg","type":"photo","url":"https://t.co/uPnsTVY6O9","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"f
it"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":850,"w":1536,"h":860},{"x":0,"y":512,"w":1536,"h":1536},{"x":0,"y":297,"w":1536,"h":1751},{"x":153,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1868333915112693760"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1267057750044839936","name":"Naina","screen_name":"Naina_2728","indices":[182,193]},{"id_str":"1267057750044839936","name":"Naina","screen_name":"Naina_2728","indices":[182,193]},{"id_str":"1207708520902037506","name":"Manish Sharma 📊 - Away","screen_name":"lucifer_x007","indices":[316,329]},{"id_str":"1280144640704827399","name":"pathik","screen_name":"pathikghugare","indices":[334,348]},{"id_str":"1682338913640386561","name":"Asmita","screen_name":"asmitaakamboj","indices":[410,424]},{"id_str":"1275094772793798658","name":"Archish S","screen_name":"xerefic","indices":[558,566]},{"id_str":"1472903054","name":"Aditya 
Das","screen_name":"theadityadas","indices":[571,584]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/uPnsTVY6O9","expanded_url":"https://x.com/anshuizme/status/1868333926676373540/photo/1","id_str":"1868333915133624320","indices":[275,298],"media_key":"3_1868333915133624320","media_url_https":"https://pbs.twimg.com/media/Ge2nzAVawAAzYpJ.jpg","type":"photo","url":"https://t.co/uPnsTVY6O9","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":593,"y":775,"h":120,"w":120},{"x":756,"y":1123,"h":117,"w":117}]},"medium":{"faces":[{"x":445,"y":581,"h":90,"w":90},{"x":567,"y":842,"h":87,"w":87}]},"small":{"faces":[{"x":251,"y":329,"h":50,"w":50},{"x":321,"y":477,"h":49,"w":49}]},"orig":{"faces":[{"x":593,"y":775,"h":120,"w":120},{"x":756,"y":1123,"h":117,"w":117}]}},"sizes":{"large":{"h":1600,"w":1066,"resize":"fit"},"medium":{"h":1200,"w":800,"resize":"fit"},"small":{"h":680,"w":453,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1600,"width":1066,"focus_rects":[{"x":0,"y":622,"w":1066,"h":597},{"x":0,"y":387,"w":1066,"h":1066},{"x":0,"y":313,"w":1066,"h":1215},{"x":266,"y":0,"w":800,"h":1600},{"x":0,"y":0,"w":1066,"h":1600}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1868333915133624320"}}},{"display_url":"pic.x.com/uPnsTVY6O9","expanded_url":"https://x.com/anshuizme/status/1868333926676373540/photo/1","id_str":"1868333915112693760","indices":[275,298],"media_key":"3_1868333915112693760","media_url_https":"https://pbs.twimg.com/media/Ge2nzAQbYAA318R.jpg","type":"photo","url":"https://t.co/uPnsTVY6O9","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048
,"width":1536,"focus_rects":[{"x":0,"y":850,"w":1536,"h":860},{"x":0,"y":512,"w":1536,"h":1536},{"x":0,"y":297,"w":1536,"h":1751},{"x":153,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1868333915112693760"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1868333926676373540","view_count":5429,"bookmark_count":6,"created_at":1734280494000,"favorite_count":101,"quote_count":0,"reply_count":21,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1868333926676373540","full_text":"I did it. I finally did it.\n\nI ran 10k in Phonpe Midnight Marathon (my first ever).\n\nRoad to 10k was long. I am so grateful to following people for unlocking this version of me:\n\n1. @Naina_2728 for showing me what is possible with running, how fun it can be and how many new connections you can make through it.\n\n2. @lucifer_x007 and @pathikghugare for pushing me always and staying with me for first 2.5k\n\n3. @asmitaakamboj was like a guiding light i guess, seeing her active and posting her progress daily made me believe that consistency is possible.\n\n4. @xerefic and @theadityadas for filling me in the information I needed to make the long run possible\n\n5. 
Mahendra for encouragement I needed on the D-Day.\n\nThank you soooo much and love you guys❤️","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}],"activities":{"nreplies":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":4,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_medi
a_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":9,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. 
Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":4,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. 
You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. 
Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. 
Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":19,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"
1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":3,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":9,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) 12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":27,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in an ML Engineer interview at Perplexity, and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit a year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy\nSame LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n- Label which docs the LLM SHOULD have used\n\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs)\nGood negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n> Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n> 1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG \n✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":5,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":76,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":8,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":46,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"face
s":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":1,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":13,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":
36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":52,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,170],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is an 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","typ
e":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com
/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":44,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,242],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\n- Job scheduler that understands GPU topology\n- Distributed checkpoint manager that doesn’t waste bandwidth\n- Network fabric optimized for all-reduce\n- Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":141,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":1,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nbookmarks":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":7,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsU
E","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count
":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":415,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":14,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":326,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop
"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":368,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":884,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) 12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":1430,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in a ML Engineer interview at Perplexity , and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's is how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit an year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy Same LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n-Label which docs the LLM SHOULD have used\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs) Good negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n>Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n>1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG ✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":447,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":4308,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":5,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":637,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"fac
es":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":20,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":1,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":
36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":1408,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,170],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is a 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":1,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","typ
e":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com
/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":1188,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,242],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\nJob scheduler that understands GPU topology Distributed checkpoint manager that doesn’t waste bandwidth Network fabric optimized for all-reduce Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":491,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":5,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nretweets":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":1,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE
","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count"
:4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":44,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":1,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":14,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"
}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":20,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":50,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) \n12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":70,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in an ML Engineer interview at Perplexity, and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on the market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit a year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy\nSame LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n- Label which docs the LLM SHOULD have used\n\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs)\nGood negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n> Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n> 1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG \n✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":44,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":145,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":24,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":20,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"face
s":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":4,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most of AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":
36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":74,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,170],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is a 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","typ
e":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com
/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":78,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,242],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\nJob scheduler that understands GPU topology Distributed checkpoint manager that doesn’t waste bandwidth Network fabric optimized for all-reduce Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":40,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nlikes":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":76,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE",
"ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count":4
,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":310,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":19,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":286,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop
"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":236,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":595,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) 12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":1010,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in a ML Engineer interview at Perplexity , and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's is how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit an year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy\nSame LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n- Label which docs the LLM SHOULD have used\n\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs)\nGood negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n> Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n> 1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG \n✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":331,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":3084,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":159,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":630,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"fac
es":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":1,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":37,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":7,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":
36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":1288,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,170],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is an 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":1,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","typ
e":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com
/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":1504,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,242],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\nJob scheduler that understands GPU topology Distributed checkpoint manager that doesn’t waste bandwidth Network fabric optimized for all-reduce Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":978,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":1,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":38,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nviews":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":6629,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE
","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count"
:4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":14812,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":1779,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":25443,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"cr
op"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":13544,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":32887,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) 12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":194313,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,190],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in an ML Engineer interview at Perplexity, and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit an year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy Same LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n-Label which docs the LLM SHOULD have used\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs) Good negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n>Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n> 1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG \n✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":17078,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":599362,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":23287,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":88316,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"f
aces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":491,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":6564,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most of AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":1641,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"
w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":215512,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,171],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is an 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":247,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","t
ype":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.c
om/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":216683,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,250],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\n> Job scheduler that understands GPU topology\n> Distributed checkpoint manager that doesn’t waste bandwidth\n> Network fabric optimized for all-reduce\n> Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour\n100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":128518,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":133,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":5362,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}]},"interactions":{"users":[{"created_at":1443053347000,"uid":"3664641493","id":"3664641493","screen_name":"Juicecountyeth","name":"🧪Juice 🧃","friends_count":1574,"followers_count":812,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1675366634188595200/aQEsh6xm_normal.jpg","description":"OG GPU seller.","entities":{"description":{"urls":[]}},"interactions":2},{"created_at":1432022453000,"uid":"3220121588","id":"3220121588","screen_name":"prajpawar23","name":"prajwal","friends_count":2626,"followers_count":859,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1956972838839078913/VdBrWn_q_normal.jpg","description":"22 // ml @qualcomm // prev - gpu engg 
@amd","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"prajpawar.com","expanded_url":"http://prajpawar.com","url":"https://t.co/HU6vdIoxm1","indices":[0,23]}]}},"interactions":2},{"created_at":1703110076000,"uid":"1737595658909908992","id":"1737595658909908992","screen_name":"RaviRaiML","name":"Ravi | ML Engineer","friends_count":260,"followers_count":1220,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1983911680657473536/yIgKdn0P_normal.jpg","description":"Freelance ML Engineer | Fixing AI products with MLOps","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"ravinderrai.com","expanded_url":"https://ravinderrai.com","url":"https://t.co/cSPDGQswR7","indices":[0,23]}]}},"interactions":2},{"created_at":1568011354000,"uid":"1170950527200292864","id":"1170950527200292864","screen_name":"hrishikesshhhh","name":"Hrishikesh Nikam","friends_count":914,"followers_count":288,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1984199310645600257/gMy9jS48_normal.jpg","description":"20 || 6'2 || CS-22 || Software Developer|| Full-Stack Dev","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"linktr.ee/hrishikesshhhh","expanded_url":"https://linktr.ee/hrishikesshhhh","url":"https://linktr.ee/hrishikesshhhh","indices":[0,23]}]}},"interactions":2},{"created_at":1559131659000,"uid":"1133706467507363840","id":"1133706467507363840","screen_name":"HyunRish","name":"rish_hyun","friends_count":27,"followers_count":0,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1245779132991979520/b3oThewl_normal.jpg","description":"another digital footprint 👣","entities":{"description":{"urls":[]}},"interactions":2},{"created_at":1260413786000,"uid":"95807398","id":"95807398","screen_name":"abhi1thakur","name":"abhishek","friends_count":1094,"followers_count":94889,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1976303094146224128/gXXFSwQw_normal.jpg","description":"AI Search 
@vespaengine, ex-@huggingface, World's First 4x GM @kaggle, YouTube 120k+: https://t.co/BHnem8fTu5","entities":{"description":{"urls":[{"display_url":"youtube.com/@abhishekkrtha…","expanded_url":"http://youtube.com/@abhishekkrthakur","url":"https://t.co/BHnem8fTu5","indices":[85,108]}]},"url":{"urls":[{"display_url":"linkedin.com/in/abhi1thakur/","expanded_url":"https://www.linkedin.com/in/abhi1thakur/","url":"https://t.co/uEbTUBVvQL","indices":[0,23]}]}},"interactions":1},{"created_at":1502905292000,"uid":"897875988222271488","id":"897875988222271488","screen_name":"_PaperMoose_","name":"Ryan","friends_count":1518,"followers_count":1051,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1988392745619386378/Eh0X86O2_normal.jpg","description":"Built ARC-AGI 2 evals @gregkamrad. Ex-CTO @ DentoAI. Built https://t.co/JtLGCSctWE for Novo Nordisk. Building automated\n reliability testing for healthcare","entities":{"description":{"urls":[{"display_url":"findmymedsapp.com","expanded_url":"http://findmymedsapp.com","url":"https://t.co/JtLGCSctWE","indices":[60,83]}]},"url":{"urls":[{"display_url":"vunda.ai","expanded_url":"https://vunda.ai","url":"https://t.co/8AbP5xJC34","indices":[0,23]}]}},"interactions":1},{"created_at":1502413899000,"uid":"895814938995957760","id":"895814938995957760","screen_name":"threadreaderapp","name":"Thread Reader App","friends_count":1234,"followers_count":785601,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1813321453183590400/lc6jtC3Y_normal.jpg","description":"I'm a 🤖 to help you read threads more easily. 
Reply to any tweet of a thread and mention me with the \"unroll\" keyword and I'll give you a link back 😀","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"ThreadReaderApp.com","expanded_url":"https://ThreadReaderApp.com","url":"https://t.co/pBpAT7Uy1z","indices":[0,23]}]}},"interactions":1},{"created_at":1500620733000,"uid":"888293854960533504","id":"888293854960533504","screen_name":"leodoan_","name":"Thanh Doan","friends_count":430,"followers_count":349,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1961238073263751168/qrWgPrpN_normal.jpg","description":"software engineer. crafting impactful things to open source world | building overwrite: https://t.co/PCgG9ZSlbu | changelogs: https://t.co/SisBYPqOo0","entities":{"description":{"urls":[{"display_url":"mnismt.com/overwrite","expanded_url":"http://mnismt.com/overwrite","url":"https://t.co/PCgG9ZSlbu","indices":[88,111]},{"display_url":"changelogs.directory","expanded_url":"http://changelogs.directory","url":"https://t.co/SisBYPqOo0","indices":[126,149]}]},"url":{"urls":[{"display_url":"doantranminhthanh.com","expanded_url":"https://doantranminhthanh.com/","url":"https://t.co/v6xaz5R5dB","indices":[0,23]}]}},"interactions":1},{"created_at":1494595384000,"uid":"863021710412570625","id":"863021710412570625","screen_name":"Hunter60505004","name":"Hunter","friends_count":627,"followers_count":92,"profile_image_url_https":"https://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png","description":"","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1492146357000,"uid":"852749744841478144","id":"852749744841478144","screen_name":"Hari1275866","name":"bidda","friends_count":1663,"followers_count":84,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1972577884301910016/rRqzYFft_normal.jpg","description":"GenAI & Data engineering | Tech Enthusiast | Programmer | keen to learn new technology 
|","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1482037687000,"uid":"810350911465799680","id":"810350911465799680","screen_name":"abtw3t","name":"Ab","friends_count":307,"followers_count":108,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1959493931168903168/azbWngMk_normal.png","description":"robotics + ml | @SAEIntl student","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1459845243000,"uid":"717269055061630977","id":"717269055061630977","screen_name":"dhruv2038","name":"Dhruv","friends_count":5361,"followers_count":4183,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1835847331108773888/2F4xtKIS_normal.jpg","description":".","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1334693025000,"uid":"556239875","id":"556239875","screen_name":"LeventTZ1","name":"LTZ","friends_count":247,"followers_count":24,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1594509331441016864/-al2cc_a_normal.jpg","description":"","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1318500234000,"uid":"390011033","id":"390011033","screen_name":"NiiMante","name":"Nii Mante","friends_count":356,"followers_count":150,"profile_image_url_https":"https://pbs.twimg.com/profile_images/378800000490349094/f4fb4e58182772999b2e7d664329aaf3_normal.jpeg","description":"Engineer. Investor. 
Traveler","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"greeks-ai.web.app","expanded_url":"http://greeks-ai.web.app","url":"https://t.co/8MsVV3WNrD","indices":[0,23]}]}},"interactions":1},{"created_at":1240916877000,"uid":"36039399","id":"36039399","screen_name":"duborges","name":"Eduardo Borges","friends_count":1678,"followers_count":16505,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1841183953199235073/vu43psbH_normal.jpg","description":"digital entrepreneur since 1997 ≫ saas ≫ mobile apps ≫ chrome extensions ≫ programmatic sites ≫ softwares ≫ chatbots ≫ hacking ≫ AI","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"viralist.ai","expanded_url":"https://viralist.ai","url":"https://t.co/f5WC89NTzI","indices":[0,23]}]}},"interactions":1},{"created_at":1434480708000,"uid":"3330038775","id":"3330038775","screen_name":"joefioti","name":"Joe Fioti","friends_count":414,"followers_count":2175,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1853888417525837824/6XBdEwVs_normal.jpg","description":"it's not possible, it's necessary. 
building a compiler @luminal_ai (yc s25) to solve inference.","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"luminal.com","expanded_url":"https://luminal.com","url":"https://t.co/bcAyeHGnLm","indices":[0,23]}]}},"interactions":1},{"created_at":1430114943000,"uid":"3177438486","id":"3177438486","screen_name":"junat321","name":"Tanuj Nayak","friends_count":61,"followers_count":7,"profile_image_url_https":"https://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png","description":"","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1409224518000,"uid":"2776199244","id":"2776199244","screen_name":"_willfalcon","name":"William Falcon ⚡️","friends_count":486,"followers_count":15299,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1843085471850893312/MAWDjJ-4_normal.jpg","description":"CEO @LightningAI. Creator, PyTorch Lightning⚡, Former AI PhD student (pretraining, researcher) @metaAI @CILVRatNYU w @kchonyc @ylecun","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"lightning.ai","expanded_url":"http://lightning.ai","url":"https://t.co/4vitCAUqOd","indices":[0,23]}]}},"interactions":1},{"created_at":1410589785000,"uid":"2768652166","id":"2768652166","screen_name":"georgecurtiss","name":"George","friends_count":251,"followers_count":1356,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1988319344158535682/Pi4T5eC9_normal.jpg","description":"CEO at @helixdb | YC X25 | calisthenics enjoyer 😎 | 🇬🇧 | 6’4” | 23\n\nStar the GH! 
https://t.co/dadvr63vpZ","entities":{"description":{"urls":[{"display_url":"github.com/helixdb/helix-…","expanded_url":"https://github.com/helixdb/helix-db","url":"https://t.co/dadvr63vpZ","indices":[81,104]}]},"url":{"urls":[{"display_url":"helix-db.com","expanded_url":"http://helix-db.com","url":"https://t.co/TvKUaLhNn9","indices":[0,23]}]}},"interactions":1}],"period":14,"start":1762193255367,"end":1763402855367},"interactions_updated":1763402855586,"created":1763402855019,"updated":1763402855586,"type":"the analyst","hits":1},"people":[{"user":{"id":"1832926413483036673","name":"truth.phd","description":"Diversification is a myth. Zapping inefficiencies into tomorrow’s gains. None of the tweets are financial advice, DYOR!","followers_count":3256,"friends_count":235,"statuses_count":21241,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1962336462382174208/yQ2g_2q9_normal.jpg","screen_name":"truthdotphd","location":"Earth","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"truth.phd","expanded_url":"https://truth.phd","url":"https://t.co/YzjNn46J0z","indices":[0,23]}]}}},"details":{"type":"The Analyst","description":"truth.phd is a deep-dive data enthusiast who transforms complex financial landscapes into clear, insightful narratives. With prolific tweeting, they dissect market inefficiencies and stock opportunities, always urging followers to do their own research. 
Their dedication to rigorous analysis makes them a trusted voice in the investment community.","purpose":"To illuminate the hidden patterns in financial markets and empower individuals to make informed long-term investment decisions through meticulous analysis and transparency.","beliefs":"They believe in cutting through the noise of mainstream financial advice, valuing data-backed insights over hype, and champion the idea that diversification is often overrated when compared to focused, high-conviction investing.","facts":"Fun fact: truth.phd has tweeted over 21,000 times—clearly, they don’t just talk analytics, they live and breathe it every day!","strength":"Exceptional ability to interpret complex financial data into actionable insights and a fearless, consistent voice that embraces transparency and encourages followers to 'do your own research.'","weakness":"Their relentless focus on data and deep analysis might intimidate or overwhelm casual investors and possibly limit broader engagement outside niche communities.","recommendation":"To grow their audience on X, truth.phd should pepper their detailed analyses with bite-sized, easily digestible summaries and engage more interactively with followers through polls or Q&A threads. 
Highlighting real-world impact stories can also widen appeal beyond hardcore finance fans.","roast":"truth.phd tweets so much, they could single-handedly keep the refresh button on X in business—maybe it's time to take a coffee break before their fingers need their own diversified portfolio.","win":"Building a reputation as a reliable, data-driven financial analyst with one of their high-impact tweets garnering over 273,000 views, establishing truth.phd as a go-to voice for serious market insights."},"created":1763414640759,"type":"the analyst","id":"truthdotphd"},{"user":{"id":"1968660851914850305","name":"Donald-axx","description":"NINETEEN CRYPTO 创始人 \n分析项目/拆解周期/记录交易与情绪\n社区链接:https://t.co/R3T7UP3EmF \npolymarket策略群:https://t.co/8HdhxPn2gS\n\n打狗就用Key👀\nAPP官网下载:https://t.co/8A61BuBGFd","followers_count":252,"friends_count":146,"statuses_count":215,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1968663521799073792/yhoPCtlS_normal.jpg","screen_name":"Donald_axx","location":"China","entities":{"description":{"urls":[{"display_url":"t.me/+1vPCJGh5281jN…","expanded_url":"https://t.me/+1vPCJGh5281jNmJl","url":"https://t.co/R3T7UP3EmF","indices":[44,67]},{"display_url":"t.me/+WuSEEGRTbiNlM…","expanded_url":"https://t.me/+WuSEEGRTbiNlMmJl","url":"https://t.co/8HdhxPn2gS","indices":[87,110]},{"display_url":"key.pro","expanded_url":"http://key.pro","url":"https://t.co/8A61BuBGFd","indices":[129,152]}]}}},"details":{"type":"The Analyst","description":"Donald-axx is a crypto project founder who dives deep into market analysis, project dissection, and the emotional cycles of trading. With a keen focus on data-backed strategies and community insights, he unpacks complex concepts into actionable advice for his followers. 
His content balances cautious optimism with critical evaluation, making him the go-to voice for crypto enthusiasts seeking clarity amid volatility.","purpose":"To empower the crypto community by demystifying project potential and market behaviors through rigorous analysis, fostering informed decisions and long-term trust in an unpredictable ecosystem.","beliefs":"Donald-axx values transparency, data-driven strategies, and emotional intelligence in trading. He believes that understanding cultural nuances and community engagement is key to amplifying project success. Trust and systematic content output beat mere hype, promoting sustainable growth over quick gains.","facts":"Donald-axx’s sports betting strategy on PolyMarket boasts an 80%+ win rate, turning a modest 100 USD stake into over 20x returns, highlighting his knack for identifying market inefficiencies and emotional price swings.","strength":"His ability to blend detailed project analysis with market sentiment and cultural insights sets him apart, enabling followers to navigate high-risk markets with a balanced and informed approach.","weakness":"He risks being perceived as overly cautious or too analytical, which may slow engagement growth by limiting viral hype and fast followers attracted to bold proclamations.","recommendation":"To grow his audience on X, Donald-axx should amplify real-time insights and quick actionable tips while maintaining his analytical rigor. Engaging in topical crypto debates, using concise threads, and hosting AMAs can boost visibility and build rapport with a broader crypto-savvy community.","roast":"Donald-axx is the kind of guy who can turn a meme coin into a PhD thesis—fun at parties if the party’s a crypto conference, less so if you just wanted to chat about what 'to the moon' really means. 
Diagnosis: brilliant analysis, social skills loading… please wait.","win":"Launching the NINETEEN CRYPTO project and successfully analyzing and predicting emerging meme coin trends on BSC, creating lasting community trust and delivering substantial returns even in a bearish market."},"created":1763410755113,"type":"the analyst","id":"donald_axx"},{"user":{"id":"147085494","name":"Joel Knee","description":"Common Sense 🇨🇦 | ISTP | AI enthusiast | $TSLA Investor | ₿itcoin Preacher\n我就想想 | 你说得对 | \n❤️ YNWA ❤️","followers_count":509,"friends_count":168,"statuses_count":4406,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1303159324966096898/QKtM24V8_normal.jpg","screen_name":"joel_knee","location":"Toronto, Ontario","entities":{"description":{"urls":[]}}},"details":{"type":"The Analyst","description":"Joel Knee is a pragmatic AI enthusiast and Tesla investor who dives deep into stock market trends and tech debates, especially around autonomous driving. His communication style is detailed and data-driven, with a clear focus on informing and engaging his audience through thoughtful analysis. Joel bridges complex concepts with accessible language, often blending cultural references and humor to keep the conversation lively.","purpose":"Joel's life purpose is to demystify complex technologies and market movements, helping followers make smarter, fact-based financial and tech decisions while promoting a rational, skeptical mindset toward hype and speculation.","beliefs":"Joel values common sense, transparency, and intellectual honesty. He believes in leveraging data and critical thinking to evaluate innovations like AI and autonomous driving realistically. He supports decentralization and disruptive technologies like Bitcoin, viewing them as pillars of future economic freedom.","facts":"Joel tweets predominantly about Tesla and autonomous driving, often providing critical insights into market hype and false assumptions. 
He’s fluent in Mandarin and English, combining cultural nuances in his posts to reach a diverse audience.","strength":"Joel’s biggest strength lies in his thorough analytical approach to tech and finance, combined with clear, well-informed critiques that challenge popular narratives. His ability to explain complex subjects with clarity helps build trust among his followers.","weakness":"His heavy focus on detail and critical evaluation sometimes results in lengthy posts that might overwhelm casual readers. Additionally, his skepticism can occasionally come off as overly cautious or pessimistic to more optimistic followers.","roast":"Joel’s the kind of guy who has a spreadsheet for his spreadsheet’s spreadsheet – if analysis was an Olympic sport, he’d never actually leave the starting blocks because he’d still be reviewing the judging criteria. Get to the punchline, Joel, we promise we can handle it!","win":"Joel has successfully carved out a niche as a respected voice in Tesla and AI investment analysis, fostering an insightful community that values deep dives over hype. His detailed breakdowns on Tesla’s market outlook have sparked informed discussions and helped followers manage their portfolios more wisely.","recommendation":"To grow his audience on X, Joel should consider adding more punchy and digestible tweets or thread summaries that capture key insights quickly. 
Engaging more in real-time conversations and occasional multimedia content (like charts or short videos) could diversify engagement and help convert his detailed expertise into viral moments."},"created":1763409693404,"type":"the analyst","id":"joel_knee"},{"user":{"id":"1150937016361553920","name":"TJ (thaddeus jiang)","description":"程序员,不是独立开发者。","followers_count":10325,"friends_count":753,"statuses_count":9442,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1831660665687207936/NI1QhPjH_normal.jpg","screen_name":"thaddeusjiangzh","location":"日本 镰仓","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"life.thaddeusjiang.com","expanded_url":"https://life.thaddeusjiang.com","url":"https://t.co/rRcAKprVK1","indices":[0,23]}]}}},"details":{"type":"The Analyst","description":"TJ is a meticulous programmer who thrives on breaking down complex backend processes and sharing practical insights with his audience. With a strong focus on system design and coding best practices, he’s the go-to for deep technical analysis and thoughtful programming advice. Despite not being an independent developer, his expertise commands respect and engagement within the developer community.","purpose":"TJ’s life purpose is to demystify software architecture and backend development, empowering other developers through clarity, precision, and sharing of best practices. By offering detailed, data-driven insights, he helps elevate coding standards and fosters a community of continuously improving engineers.","beliefs":"TJ believes that understanding the right balance between business logic and technical implementation is key to efficient software solutions. 
He values structured knowledge, shared learning, and the power of industry best practices to cut through complexity and deliver robust systems.","facts":"Fun fact: TJ once created a detailed backend interview process flow that gained significant traction and feedback, showcasing his passion for process and structure in software engineering.","strength":"TJ’s strengths lie in his analytical mindset, thorough understanding of backend systems, and ability to translate complex technical topics into clear, actionable advice that resonates with developers at multiple levels.","weakness":"TJ’s intense focus on technical depth might sometimes make his content less accessible to casual followers or those newer to programming, potentially limiting audience growth outside core tech circles.","recommendation":"To grow his audience on X, TJ should blend his deep-dive technical content with more approachable explanations and occasional storytelling about his coding experiences. Engaging more in community conversations and sharing bite-sized tips could also help attract newcomers without losing his expert followers.","roast":"TJ’s tweets are so technically dense, you’d need a PhD and a strong coffee just to decode one — he’s basically the guy who turns even casual programmer chats into full-on backend thesis defenses!","win":"TJ’s biggest win is crafting a comprehensive and widely appreciated backend interview workflow that not only showcases his expertise but also serves as a valuable resource for the developer community."},"created":1763408725248,"type":"the analyst","id":"thaddeusjiangzh"},{"user":{"id":"1606113722967244801","name":"Cool X Media Group 互fo","description":"financial freedom individual freedom mostly spiritual freedom. 
财务自由 人身自由 精神自由","followers_count":4810,"friends_count":7304,"statuses_count":35285,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1769633026462371840/o61ygMzV_normal.jpg","screen_name":"teslamillion","location":"","entities":{"description":{"urls":[]}}},"details":{"type":"The Analyst","description":"Cool X Media Group 互fo is a high-frequency commentator focused on global current events with a sharp edge towards financial, individual, and spiritual freedom. Their tweets blend geopolitical analysis with candid and unfiltered observations, capturing complex topics in a direct manner. Despite the high volume of content, their insights often invite dialogue and reflection on pressing international issues.","purpose":"To equip their audience with critical perspectives on freedom and geopolitics, empowering individuals to understand and navigate the complexities of financial independence and global power dynamics.","beliefs":"They value financial freedom, personal autonomy, and spiritual liberation, believing these elements are key to a fulfilled and resilient life. They hold a pragmatic, sometimes blunt worldview that appreciates the importance of hard truths and transparency in understanding the world.","facts":"This profile has tweeted over 35,000 times, demonstrating an unparalleled commitment to sharing continuous updates and analysis, making them a relentless information source.","strength":"Exceptional consistency and volume in content production, combined with a bold, straightforward communication style that covers complex geopolitical topics with clarity and urgency.","weakness":"Overwhelming tweet frequency can cause follower fatigue, while the blunt delivery may sometimes alienate audience members seeking nuance or lighter content.","roast":"With a tweet count that rivals an over-caffeinated news ticker, they’re basically the human embodiment of ‘breaking news’—whether you want it or not. 
Sometimes you wonder if they sleep or just copy-paste live from a war room.","win":"Sustained engagement with a niche audience that values unfiltered geopolitical updates and freedom-oriented commentary, carving out a distinct voice in a crowded digital space.","recommendation":"To grow their audience on X, they should consider adding more interactive content such as polls or Q&As to foster community engagement, and balance their high-volume posting with curated, digestible threads that highlight key insights, making their expertise more accessible and shareable."},"created":1763408238569,"type":"the analyst","id":"teslamillion"},{"user":{"id":"931334795572871169","name":"0xNullPath","description":"低强度工作,高强度刷推,摸鱼主理人,不键政,不喜欢粉红 #老婆@whikylucky","followers_count":337,"friends_count":595,"statuses_count":705,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1967496933704192000/C3mfwb3K_normal.jpg","screen_name":"luyanfcp","location":"People's Republic of China","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"blog.0xnullpath.work","expanded_url":"https://blog.0xnullpath.work","url":"https://t.co/ouza8O4Abe","indices":[0,23]}]}}},"details":{"type":"The Analyst","description":"0xNullPath is a thoughtful tech enthusiast who balances a low-intensity work approach with high-intensity social media engagement. Known for insightful commentary on coding, GPU architecture, and real-world challenges, they combine technical depth with relatable personal reflections. They avoid political debates and prefer authenticity, making their content both engaging and trustworthy.","purpose":"To decode complex technical concepts and real-life dilemmas through logical analysis while fostering an honest and relatable online presence that informs and resonates with a tech-savvy community.","beliefs":"Values intellectual honesty, continuous learning, and balanced work-life dynamics. 
Believes in sharing knowledge pragmatically without unnecessary drama or political conflict, emphasizing genuine connection over superficial popularity.","facts":"Despite identifying as a 'low-intensity worker,' 0xNullPath maintains high activity on social media and volunteers at key industry events like PyCon, showing commitment without burnout.","strength":"Exceptional ability to dissect technical frameworks and real-world issues with clarity and practical insight, paired with consistent and authentic audience engagement.","weakness":"May struggle to broaden appeal beyond niche tech audiences due to avoidance of trending political topics and limited self-promotion—potentially narrowing follower growth.","recommendation":"To grow their audience on X, they should consider weaving in more storytelling elements about their personal and professional journey, engage in relevant non-political conversations, and use hashtag strategies related to tech trends and community events to increase visibility.","roast":"The only thing 0xNullPath analyzes harder than GPU benchmarks is why their tweets don't go viral—maybe if you analyzed some meme formats instead of just code, you'd see that retweet count go up!","win":"Successfully volunteered and contributed meaningfully at PyCon, connecting with industry peers and even meeting their idol, which highlights both networking skill and passion for tech community building."},"created":1763403723327,"type":"the analyst","id":"luyanfcp"},{"user":{"id":"1902568136450424832","name":"Aoi M","description":"ASD & ADHD / 外国語教育 ASD研究 撮影 音ゲー ツイ廃 男ママ タメ口 / Arcaea PTT 11.39 maimai 13,422 / CN EN JP KR / 🤍 @srkmanno @watebird14760 @wanzi0209 / sub @aoim31 無言フォロー失礼","followers_count":1234,"friends_count":1712,"statuses_count":4510,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1986249934363127808/1h7EEpy-_normal.jpg","screen_name":"aoim33","location":"西湖 杭州 浙江 & 徐汇 上海, 
中国","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"t.me/aoinohanaya","expanded_url":"https://t.me/aoinohanaya","url":"https://t.co/8mjKrVdOKO","indices":[0,23]}]}}},"details":{"type":"The Analyst","description":"Aoi M is a multilingual, scholarly mind diving deep into the nuances of ASD and ADHD research while juggling the colorful worlds of rhythm games and photography. Their keen analytical skills and passion for language and education shine through thoughtful insights and collaborations with prestigious institutions. With a flair for interdisciplinary thinking, Aoi navigates both academic terrains and playful digital arenas with equal zest.","purpose":"To advance understanding and support for individuals with autism spectrum disorders through rigorous research and educational innovation, while fostering thoughtful dialogues in both academic and online communities.","beliefs":"Aoi values precision, interdisciplinary exploration, and culturally rich perspectives. They believe in the power of education tailored to neurodivergent needs and uphold a respect for local traditions balanced with thoughtful, evidence-based urban governance.","facts":"Despite loving photography, Aoi prefers using their iPhone 15 and Samsung S24 over traditional cameras, showcasing a modern, tech-savvy approach. 
Also, their deep involvement in ASD research is partnered with top universities like Peking University and Sun Yat-sen University.","strength":"Exceptional analytical thinking, linguistic dexterity across six languages, and the ability to connect academic theory with practical educational strategies for ASD learners.","weakness":"Struggles with social adaptability and oral expression, leading to occasional communication barriers and a hesitant public presence despite a prolific digital footprint.","recommendation":"Leverage your detailed research insights and multilingual skills by creating bite-sized, accessible threads on X sharing cutting-edge ASD findings and language tips. Engage followers with candid, personal reflections to humanize your expertise and grow a dedicated, interactive community.","roast":"Aoi talks about being a 'giant in thought but a midget in action'—which perfectly explains why your tweets outpace your actual social outings; you’ve basically mastered keyboard charisma but forgot to RSVP to real-life conversations!","win":"Successfully established collaborative research partnerships with leading academic institutions, significantly contributing to the understanding and educational strategies for ASD spectrum learners."},"created":1763403226938,"type":"the analyst","id":"aoim33"},{"user":{"id":"1744055410108080128","name":"prinz","description":"be not afraid of greatness","followers_count":6151,"friends_count":2156,"statuses_count":7063,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1874359541720092672/ciOMFG2x_normal.jpg","screen_name":"deredleritt3r","location":"","entities":{"description":{"urls":[]}}},"details":{"type":"The Analyst","description":"Prinz is a sharp-minded observer and explainer of cutting-edge scientific and technological breakthroughs, especially in AI and computational biology. 
Their tweets are packed with insightful, data-driven revelations that capture the excitement of the moment while educating their audience. Prinz thrives on uncovering and sharing novel discoveries that push humanity forward.","purpose":"To illuminate the frontier of human knowledge by translating complex scientific milestones into engaging narratives that inspire curiosity and appreciation for the power of technology and research.","beliefs":"Prinz values rigorous experimentation, data validation, and transparency in communicating scientific progress. They believe that technology, particularly AI, holds transformative potential for solving some of the world’s toughest challenges and that knowledge-sharing accelerates innovation.","facts":"Prinz regularly highlights novel AI-generated scientific discoveries, including experimental validation in living cells, signaling their passion for blending biology and machine learning to pioneer new frontiers.","strength":"Exceptional ability to synthesize and translate complex scientific data into accessible, exciting content that resonates with a knowledgeable audience. Prinz also demonstrates a strong grasp of recent breakthroughs that positions them as a trusted source in their niche.","weakness":"Their intense focus on niche technical content may alienate casual followers who prefer lighter or broader engagement. 
Additionally, the high volume of tweets can sometimes overwhelm or dilute their key messages without strategic curation.","roast":"Prinz tweets so much cutting-edge science and tech that NASA’s considering adding them to the International Space Station’s reading list—because hey, who else can make rocket science sound like casual weekend reading?","win":"Successfully attracted hundreds of thousands of views and thousands of likes on tweets sharing groundbreaking advances in AI and science, establishing themselves as a go-to voice for big news on novel AI applications and validated scientific breakthroughs.","recommendation":"To grow their audience on X, Prinz should mix in more engaging anecdotes or simplified explanations alongside their technical content, use threaded tweets for deeper dives, and actively engage with followers and complementary influencers to foster a stronger community around AI and science innovation."},"created":1763402826374,"type":"the analyst","id":"deredleritt3r"},{"user":{"id":"34528592","name":"Joshua Done","description":"Science Fiction, Fantasy, Economics, Analysis. Watch my full videos on X / Twitter under the Highlights tab. #fantasy #puns #economics Anti-Authoritarian.","followers_count":30437,"friends_count":27601,"statuses_count":50958,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1195763111791218688/VFPpgI5E_normal.jpg","screen_name":"JoshuaDone","location":"Maple Valley","entities":{"description":{"urls":[]}}},"details":{"type":"The Analyst","description":"Joshua Done is a powerhouse of insight blending science fiction, fantasy, and economics into compelling analysis. His tweets aren’t just posts; they’re intellectual adventures fueled by sharp wit and anti-authoritarian smarts. 
Dive into his Highlights tab for deep dives that make complex ideas approachable and entertaining.","purpose":"To dissect and decode the complexities of society, economics, and speculative worlds, empowering his audience to question norms and think critically about the systems that govern their lives.","beliefs":"Joshua values intellectual independence, cherishes skepticism towards authority, and believes that humor and critical thinking are powerful tools to challenge the status quo and uncover hidden truths.","facts":"Fun fact: Despite not having a publicly visible follower count, Joshua’s tweets regularly generate tens or even hundreds of thousands of views and likes, proving that quality and engagement trump simple numbers.","strength":"His analytical mind melds niche speculative fiction with hard economic insights, making his content both thought-provoking and intriguingly accessible. His prolific tweeting ensures he stays top-of-mind.","weakness":"With nearly 51,000 tweets and following 27,601 accounts, Joshua risks overwhelming his audience with volume, potentially diluting the impact of his sharpest content amid the flood.","recommendation":"To grow his audience further on X, Joshua should curate his extensive output by spotlighting themed threads or series, using more visual summaries to captivate scrollers, and strategically engaging with influencers to amplify reach beyond his current network.","roast":"Joshua tweets so much that his phone probably files for exhaustion—it's hard to tell if he’s brainstorming economics or live-tweeting a dystopian novel in progress. 
Somewhere, a fantasy character just asked for a break.","win":"Achieving viral traction with intellectually dense content like 'Schrodinger's Gold,' which amassed over 1.3 million views and 17,000 likes, illustrates his ability to blend niche interests with broad appeal effortlessly."},"created":1763401138666,"type":"the analyst","id":"joshuadone"},{"user":{"id":"2182142057","name":"Turki","description":"Crypto curious | OG vibe\n@KaitoAI | @PortaltoBitcoin","followers_count":980,"friends_count":790,"statuses_count":18194,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1954539803367936001/p8gCfd9Q_normal.jpg","screen_name":"TurkiiHD","location":"Lofoten","entities":{"description":{"urls":[]}}},"details":{"type":"The Analyst","description":"Turki is a crypto-savvy thinker with a razor-sharp focus on emerging trends in decentralized science, real-world assets, and AI-powered finance. Their tweets dive deep into market mechanics and innovative projects, offering thoughtful insights that engage a knowledgeable community. They bring an OG vibe to crypto discourse, blending wisdom with cutting-edge updates.","purpose":"To decode and spotlight pioneering financial and scientific ecosystems on-chain, empowering their audience to understand complex decentralized concepts and position themselves advantageously ahead of market shifts.","beliefs":"Turki believes in transparency, reproducibility, and open governance as foundations for technological and financial progress. 
They trust that real value flows through sustainable mechanisms, not hype, and that community-backed innovation will fuel the next breakthroughs in finance and science.","facts":"Despite tweeting over 18,000 times, Turki keeps a curated, insightful lens on emerging projects like DeSci, real estate on-chain, and AI-NFT integrations, revealing patterns others might miss.","strength":"Exceptional ability to identify early-stage innovations and explain complex financial protocols with clarity, backed by data and trend analysis. Turki’s detailed commentary fosters informed debate among followers deeply interested in crypto’s future.","weakness":"Highly analytical and niche-focused, Turki may sometimes overcomplicate messages for casual followers, potentially limiting mass appeal or broader engagement beyond crypto insiders.","recommendation":"To grow on X, Turki should complement deep dives with more approachable threads and occasional simple summaries or infographics. Engaging in popular crypto conversations with hot takes can attract diverse followers while preserving their analytical brand.","roast":"Turki’s so deep in crypto analysis, they probably measure their steps in satoshis and dream about liquidity pools instead of sleep. 
They’re the person who turns ‘just a tweet’ into a thesis paper — grab your coffee, folks, this is going to be a long read.","win":"Successfully building a reputation as a reliable early indicator for powerful shifts in DeSci and Real World Asset protocols, becoming a trusted source for strategy and project tracking within the crypto community."},"created":1763399887063,"type":"the analyst","id":"turkiihd"},{"user":{"id":"1958147576492093440","name":"r eat","description":"fake peater + research chemical enthusiast","followers_count":1451,"friends_count":503,"statuses_count":709,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1964035806022025216/3zQ33TlH_normal.jpg","screen_name":"supplementmaxer","location":"","entities":{"description":{"urls":[]}}},"details":{"type":"The Analyst","description":"r eat is a research chemical enthusiast who dives deep into complex biochemical and physiological topics to share insights with their followers. Their tweets blend scientific curiosity with a passionate concern for health and well-being. They are driven by data and evidence, often highlighting lesser-known facts about hormones and their impact on human development.","purpose":"To educate and inform their audience on nuanced biochemical processes and their implications, fostering a better understanding of health through scientific inquiry and personal experimentation.","beliefs":"They value scientific rigor, evidence-based health interventions, and the power of knowledge to improve life outcomes. 
They hold a strong belief in early life influences on adult behavior and advocate for proactive health measures, especially concerning hormonal health during critical life stages.","facts":"Despite not having a large follower count, r eat has an impressive reach, as evidenced by a tweet garnering over half a million views and high engagement, showing their niche content resonates deeply with a dedicated audience.","strength":"Exceptional ability to present complex scientific information in an accessible and engaging manner, combined with a passionate, research-driven approach to health and hormones. Their focus on emerging and sometimes controversial topics sets them apart as a thought-provoker.","weakness":"Their niche focus and technical jargon may alienate a broader audience, and certain controversial statements (e.g., antisemitism link tweet) risk misunderstanding or backlash, which could hinder wider acceptance and growth.","roast":"r eat’s Twitter feed reads like a late-night chemistry lecture where caffeine addiction meets a conspiracy theorist’s notebook—always deep, often intense, and just unpredictable enough to keep you second-guessing if you’ve taken too many research chemicals today.","win":"Achieving viral engagement with a deeply educational tweet on childhood stress hormones, capturing over half a million views and nearly 11,000 likes, demonstrating their ability to combine passion and data successfully.","recommendation":"To grow their audience on X, r eat should balance their deep-dive scientific content with more relatable, simplified threads and engage more interactively with their followers to build community. Addressing controversial topics with nuanced, respectful conversations could help mitigate pushback and broaden their reach."},"created":1763399341406,"type":"the analyst","id":"supplementmaxer"},{"user":{"id":"1871534752681095168","name":"Muddy 😎","description":"born lucky. committed to bits. 
dad wannabe","followers_count":81,"friends_count":324,"statuses_count":1958,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1871547757003198465/s-WMF6_P_normal.jpg","screen_name":"mudlott","location":"Singapore","entities":{"description":{"urls":[]}}},"details":{"type":"The Analyst","description":"Muddy 😎 is a thoughtful and introspective user who combines sharp wit with a deep curiosity about human behavior and interactions. They enjoy dissecting social dynamics through both humor and insightful observations, often reflecting on identity and self-perception. Committed and a bit playful, Muddy’s online persona reveals a blend of intellect and everyday relatability.","purpose":"To explore and understand the complexities of human interactions and self-awareness, using humor and analysis to provoke thought and foster authentic conversations.","beliefs":"Muddy values introspection, honesty, and the nuanced understanding of self and others. They believe in personal growth through reflection and embrace the imperfections and quirks that make people unique. Authenticity and mental agility are key in how they engage with the world.","facts":"Fun fact: Muddy thinks every intense self-disclosure session should be followed by a fighting sport – a unique approach to emotional catharsis and conflict resolution!","strength":"Muddy's strength lies in their ability to unpack complex social and psychological concepts with humor and clarity, engaging followers in thoughtful dialogue while maintaining a relatable and approachable voice.","weakness":"Their analytically rich tweets might occasionally come off as too abstract or niche, limiting wider engagement and making some content feel a bit inaccessible for casual followers.","roast":"Muddy’s the type who's probably got an entire mental spreadsheet tracking everyone’s quirks and emotional states – but don’t ask them to stop analyzing long enough to just chill. 
They’re trying so hard to be ‘dad’ material, they might just overanalyze their way into a dad joke apocalypse.","win":"Consistently crafting tweets that balance humor, psychological insight, and genuine curiosity, Muddy has built a loyal niche audience that appreciates their unique voice and depth of content.","recommendation":"To grow their audience on X, Muddy should mix in more accessible and relatable tweets paired with their deep analysis—think bite-sized takeaways and punchy one-liners. Engaging in trending conversations with their distinct analytical twist could also broaden reach without compromising authenticity."},"created":1763399319357,"type":"the analyst","id":"mudlott"}],"activities":{"nreplies":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":4,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"
media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! 
https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":9,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":4,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":19,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"
}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":3,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":9,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) \n12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":27,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in an ML Engineer interview at Perplexity, and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on the market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit a year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy.\nSame LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n- Label which docs the LLM SHOULD have used\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs)\nGood negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n> Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n>1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG ✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":5,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":76,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":8,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":46,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"face
s":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":1,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":13,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":
36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":52,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,170],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is a 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","typ
e":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com
/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":44,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,242],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\nJob scheduler that understands GPU topology Distributed checkpoint manager that doesn’t waste bandwidth Network fabric optimized for all-reduce Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":141,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":1,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nbookmarks":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":7,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsU
E","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count
":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":415,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":14,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":326,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop
"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":368,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":884,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) \n12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":1430,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in an ML Engineer interview at Perplexity, and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit an year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy\nSame LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n- Label which docs the LLM SHOULD have used\n\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs)\nGood negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n> Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n> 1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG \n✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":447,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":4308,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":5,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":637,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"fac
es":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":20,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most of AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":1,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":
36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":1408,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,170],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is a 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":1,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","typ
e":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com
/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":1188,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,242],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\nJob scheduler that understands GPU topology Distributed checkpoint manager that doesn’t waste bandwidth Network fabric optimized for all-reduce Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":491,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":5,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nretweets":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":1,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE
","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count"
:4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":44,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":1,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":14,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"
}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":20,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":50,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) \n12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":70,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in an ML Engineer interview at Perplexity, and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on the market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit a year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy\nSame LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n- Label which docs the LLM SHOULD have used\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs) Good negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n> Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n> 1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG \n✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":44,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":145,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":24,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":20,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"face
s":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":0,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":4,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":0,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":
36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":74,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,170],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is an 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":0,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","typ
e":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com
/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":78,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,242],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\nJob scheduler that understands GPU topology Distributed checkpoint manager that doesn’t waste bandwidth Network fabric optimized for all-reduce Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":40,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":0,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":0,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nlikes":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":76,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE",
"ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count":4
,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":310,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":19,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":286,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop
"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":236,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":595,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) 12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":1010,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in a ML Engineer interview at Perplexity , and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's is how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit an year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy Same LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n-Label which docs the LLM SHOULD have used\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs) Good negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n>Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n>1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG ✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":331,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":3084,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":159,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":630,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"fac
es":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":1,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":37,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most of AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":7,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":
36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":1288,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,170],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is a 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":1,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","typ
e":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com
/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":1504,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,242],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\nJob scheduler that understands GPU topology Distributed checkpoint manager that doesn’t waste bandwidth Network fabric optimized for all-reduce Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":978,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":1,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":38,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}],"nviews":[{"label":"2025-10-19","value":0,"startTime":1760745600000,"endTime":1760832000000,"tweets":[]},{"label":"2025-10-20","value":0,"startTime":1760832000000,"endTime":1760918400000,"tweets":[]},{"label":"2025-10-21","value":6629,"startTime":1760918400000,"endTime":1761004800000,"tweets":[{"bookmarked":false,"display_text_range":[0,146],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE
","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/eX3rpJNsUE","expanded_url":"https://x.com/athleticKoder/status/1980198277036527741/photo/1","id_str":"1980197872076271616","indices":[147,170],"media_key":"3_1980197872076271616","media_url_https":"https://pbs.twimg.com/media/G3sTeR4XMAAP66Y.jpg","type":"photo","url":"https://t.co/eX3rpJNsUE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1212,"w":1258,"resize":"fit"},"medium":{"h":1156,"w":1200,"resize":"fit"},"small":{"h":655,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1212,"width":1258,"focus_rects":[{"x":0,"y":0,"w":1258,"h":704},{"x":0,"y":0,"w":1212,"h":1212},{"x":0,"y":0,"w":1063,"h":1212},{"x":42,"y":0,"w":606,"h":1212},{"x":0,"y":0,"w":1258,"h":1212}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1980197872076271616"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1980198277036527741","view_count":6629,"bookmark_count":7,"created_at":1760951034000,"favorite_count":76,"quote_count":2,"reply_count"
:4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980198277036527741","full_text":"> aws is down\n> vercel is down\n> substack is down\n> perplexity is down\n> canva is down\n> slack is down\n\nHAPPY DIWALI TO YOU ALL! https://t.co/eX3rpJNsUE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-22","value":14812,"startTime":1761004800000,"endTime":1761091200000,"tweets":[{"bookmarked":false,"display_text_range":[0,272],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980612676288991240","view_count":14812,"bookmark_count":415,"created_at":1761049834000,"favorite_count":310,"quote_count":0,"reply_count":9,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1980612676288991240","full_text":"Techniques I'd master to build great evals for AI apps.\n\n1. LLM-as-Judge\n2. Reference-based similarity metrics\n3. Pairwise comparison tournaments\n4. Human-in-the-loop evaluation\n5. Synthetic data generation\n6. Adversarial test case creation\n7. Multi-dimensional rubrics\n8. Regression testing on golden datasets\n9. A/B testing with live traffic\n10. Statistical significance testing\n11. Evaluation dataset curation & versioning\n12. Domain-specific benchmarks\n13. Red teaming & jailbreak testing\n14. Latency & cost monitoring\n15. User feedback loops\n16. 
Calibration & confidence scoring","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-23","value":1779,"startTime":1761091200000,"endTime":1761177600000,"tweets":[{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1980975235319931131","view_count":1779,"bookmark_count":14,"created_at":1761136275000,"favorite_count":19,"quote_count":0,"reply_count":4,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1980975235319931131","full_text":"I blocked her yesterday.\n\nFirst I felt she led me on.\nLater I realized she didn't feel the same way.\nAnd she friend-zoned me after months of late-night calls.\n\nBut now that I recall, she said something that hit different:\n\n\"You keep hoping I'll change my answer if you just keep showing more and better of you.\"\n\nI sat there shocked.\nSomething had to be done.\nI made the decision - I WAS DONE.\n\nBut then I realized...\n\nShe just perfectly described how LLMs fail at evals.\n\nTurns out I wasn't just being rejected. I was being overfitted.\n\nSee, in AI (and apparently dating), throwing more examples at the same evaluator doesn't mean you're actually improving. You're just optimizing for one specific judge.\n\nThis is the fatal flaw most teams make with LLM evals.\n\nHere's what actually happens:\n> You build a prompt.\n> Test it on GPT-4 as judge.\n> Iterate until GPT-4 gives you 95% approval.\n> Ship it.\n> Users hate it.\n\nYou didn't build a better model. You built a GPT-4 pleaser.\nJust like I didn't become a better person. 
I became a her-pleaser.\n\nThis is called Judge Overfitting.\n\nWhen you use the same eval repeatedly:\n- LLM learns the judge's quirks\n- Scores go up, quality doesn't\n- You mistake agreement for improvement\n\nIt's like studying the teacher instead of the subject. However this problem can be solved by diverse evaluation.\n\n1. Multiple Judges\n- Don't rely on one LLM-as-judge:\n- GPT-4 for reasoning\n- Claude for safety\n- Llama for cost-sensitive checks\n- Humans for edge cases\n\nDifferent judges = different blind spots exposed.\n\n2. Pairwise Comparisons\n\nStop asking \"Is this good?\"\nStart asking \"Which is better: A or B?\"\nHumans are terrible at absolute ratings.\n\nWe're excellent at relative judgments. Your eval should be too.\n\n3. Adversarial Testing\n\nDeliberately try to break your system:\n- Jailbreak attempts\n- Edge case injection\n- Boundary testing\n- Unexpected input formats\n\nIf you're not red-teaming, you're just hoping.\n\n4. Golden Dataset Versioning\n\n> Create a test set of real failures.\n> Version it like code.\n> Run regression tests on every change.\n\nNever fix the same bug twice.\n\n5. Human-in-the-Loop\n\n> Sample 5-10% of production outputs.\n> Get real human feedback.\n> Use it to calibrate your automated evals.\n\nHumans are expensive but they're ground truth.\n\n6. A/B Testing in Production\n\nThe only eval that matters is user behavior:\n- Click-through rates\n- Task completion\n- User satisfaction scores\n\nYour lab metrics mean nothing if users don't prefer it.\n\n7. Multi-Dimensional Rubrics\n\nDon't just score \"quality.\"\n\nScore:\n- Accuracy\n- Helpfulness\n- Safety\n- Tone\n- Conciseness\n- Format adherence\n\nOne number hides what's actually broken.\n\n8. Statistical Significance\n\nChanged your prompt and scores went from 87% to 89%?\n\n- With 50 examples? That's noise.\n- With 500 examples? Maybe signal.\n- With 5000 examples? Now we're talking.\n\nMost \"improvements\" are just variance.\n\n9. 
Synthetic Data Generation\n\nCan't get enough real examples? Generate edge cases:\n- Rare languages\n- Complex scenarios\n- Adversarial inputs\n- Domain-specific jargon\n\nStress test before users do.\n\n10. Calibration Scoring\n\nYour LLM says it's 90% confident?\nTrack: How often is it right when it says 90%?\n\nCalibrated confidence = you know when to trust it.\nUncalibrated = every answer sounds equally sure.\n\nThe Pattern:\n\nBad eval process:\n1. Build\n2. Test on one judge\n3. Optimize for that judge\n4. Ship\n5. Fail in production\n\nGood eval process:\n1. Build\n2. Test on diverse judges\n3. Red team it\n4. A/B test with real users\n5. Monitor and iterate\n\nMost teams spend 90% of time on prompts, 10% on evals.\nShould be reversed.\n\nA mediocre prompt with great evals beats a great prompt with mediocre evals every time.\n\nBecause you can't improve what you can't measure accurately.\nJust like in dating - getting rejected by one person doesn't mean you're broken.\n\nIt means you were optimizing for the wrong judge.\n\nOr run a pairwise comparison: \"Me vs that other guy - which is better?\"\n\nBut hey, at least my LLM evals are solid 
now.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-24","value":25443,"startTime":1761177600000,"endTime":1761264000000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/0a7mTbPJSE","expanded_url":"https://x.com/athleticKoder/status/1981219822357922140/photo/1","id_str":"1981219671257874432","indices":[38,61],"media_key":"3_1981219671257874432","media_url_https":"https://pbs.twimg.com/media/G360y0dXgAA5zIo.jpg","type":"photo","url":"https://t.co/0a7mTbPJSE","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":966,"w":1600,"resize":"fit"},"medium":{"h":725,"w":1200,"resize":"fit"},"small":{"h":411,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"cr
op"}},"original_info":{"height":966,"width":1600,"focus_rects":[{"x":0,"y":70,"w":1600,"h":896},{"x":437,"y":0,"w":966,"h":966},{"x":497,"y":0,"w":847,"h":966},{"x":679,"y":0,"w":483,"h":966},{"x":0,"y":0,"w":1600,"h":966}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1981219671257874432"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981219822357922140","view_count":1630,"bookmark_count":0,"created_at":1761194589000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981219822357922140","full_text":"I just made my first internet dollar. https://t.co/0a7mTbPJSE","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,185],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981337422156710221","view_count":23008,"bookmark_count":326,"created_at":1761222627000,"favorite_count":268,"quote_count":3,"reply_count":8,"retweet_count":13,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You're in a ML Inference Engineer Interview at Meta, and the interviewer asks:\n\n\"Why do we need KV cache? 
Can't we just recompute attention for every new token?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430918623310","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"This is why serving systems use:\n- Paged attention (vLLM)\n- KV cache quantization\n- Continuous batching strategies","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430113284361","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337431707128228","view_count":225,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The techniques that separate amateurs from pros:\n\n> Multi-Query Attention (MQA): Share Keys/Values across heads (8x memory reduction)\n> Grouped-Query Attention (GQA): Middle ground between MQA and MHA\n> PagedAttention: Store KV cache in non-contiguous memory blocks\n> KV quantization: INT8 or INT4 KV cache (50-75% memory savings)\n\nModern serving systems use ALL of 
these.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337430918623310","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337432621535510","view_count":204,"bookmark_count":0,"created_at":1761222630000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The answer that gets you hired:\n\nKV cache eliminates quadratic recomputation by storing Keys and Values from previous tokens. It's a memory-compute tradeoff that makes generation 10-100x faster.\n\nWithout it, production LLM inference is impossible.\n\nThe challenge isn't whether to use it - it's how to manage memory efficiently.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337431707128228","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337423243022581","view_count":67,"bookmark_count":0,"created_at":1761222627000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Don't say: \"To make generation faster.\"\n\nToo shallow. 
The real answer is the quadratic recomputation problem.\n\nEvery token generation requires attention over ALL previous tokens.\n\nNo KV cache = O(n²) recomputation for every single token you generate.\n\nThat's the difference between 0.1s and 10s per token.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337422156710221","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337424170041779","view_count":59,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"You know why naive generation fails?\n\nWithout KV cache, generating a 100-token response:\n- Token 1: compute attention over 1000 context tokens\n- Token 2: recompute ALL 1001 tokens\n- Token 100: recompute ALL 1099 tokens\n\nTotal: ~55,000 redundant attention computations.\n\nYou're recalculating the same Keys and Values thousands of 
times.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337423243022581","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,83],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[60,83]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1981337425017209272","view_count":50,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"btw get this kinda content in your inbox daily - \n\nsub to - https://t.co/WiituAqKHr","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337424170041779","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337426489385004","view_count":46,"bookmark_count":0,"created_at":1761222628000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The computational waste is insane:\n\n> Without KV cache: 100 output tokens = 55,000 attention operations\n> With KV cache: 100 output tokens = 100 attention operations (550x reduction)\n\nMemory cost? ~2 bytes × layers × heads × hidden_dim per token\n\nFor Llama 70B: ~1.2GB for 1000 cached tokens.\n\nYou're trading memory for compute. 
Always worth it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337425017209272","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337427529630035","view_count":45,"bookmark_count":0,"created_at":1761222628000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The breakthrough everyone misses:\n\nDuring generation, the Keys and Values for past tokens NEVER CHANGE.\n\n> Without cache: Recompute K and V matrices for all previous tokens\n> With cache: Store computed K,V once, retrieve from memory\n\nOnly the Query for the NEW token needs computation.\n\nThis is why it's called autoregressive generation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337426489385004","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337428435632264","view_count":33,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"What actually gets cached:\n\nFor each transformer layer:\n- Keys: [batch, num_heads, seq_len, head_dim]\n- Values: [batch, num_heads, seq_len, head_dim]\n\nNOT cached: Queries (computed fresh each step)\n\nFor 32 layers × 32 heads × 128 dim = massive memory, but still cheaper than 
recomputing.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337427529630035","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337429287072218","view_count":24,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"The speed improvement is dramatic:\n\nWithout KV cache:\n- 100 token generation = ~30 seconds\n- Each token waits for full attention recomputation\n\nWith KV cache:\n- 100 token generation = ~3 seconds\n- Each token only computes new attention scores\n\nThat's 10x faster generation. Your users feel this immediately.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337428435632264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1981337430113284361","view_count":26,"bookmark_count":0,"created_at":1761222629000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981337422156710221","full_text":"Why you can't always cache everything:\n\n> Memory scaling: Linear with sequence length (1000 tokens = ~1GB for big models)\n> Batch size impact: Can't fit as many requests in GPU memory\n> Context length: 100k context = 100GB of KV 
cache","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1981337429287072218","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-25","value":13544,"startTime":1761264000000,"endTime":1761350400000,"tweets":[{"bookmarked":false,"display_text_range":[0,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","quoted_status_id_str":"1981631531551772945","quoted_status_permalink":{"url":"https://t.co/6K2edP4aTJ","expanded":"https://twitter.com/abhi1thakur/status/1981631531551772945","display":"x.com/abhi1thakur/st…"},"retweeted":false,"fact_check":null,"id":"1981632419007811951","view_count":3117,"bookmark_count":3,"created_at":1761292960000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1981632419007811951","full_text":"Super excited for this!","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1981708263734280694","view_count":10311,"bookmark_count":365,"created_at":1761311043000,"favorite_count":228,"quote_count":0,"reply_count":2,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1981708263734280694","full_text":"Concepts I'd master to build production ML systems with PyTorch.\n\nBookmark this 👇\n\n1. TorchScript for model serialization \n2. torch.compile for 2x speedups \n3. Distributed training with DDP/FSDP \n4. Mixed precision with torch.amp \n5. Custom CUDA kernels with Triton \n6. Model quantization (PTQ & QAT) \n7. TorchServe for model deployment \n8. Lightning for cleaner training loops \n9. Dataset optimization with DataLoader workers \n10. Profiling with torch.profiler \n11. 
ONNX export for cross-platform inference \n12. Gradient accumulation for large batches \n13. Learning rate scheduling strategies \n14. Model checkpointing & recovery \n15. TorchVision/Audio/Text domain libraries \n16. Integration with HuggingFace ecosystem","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[20,24],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"33836629","name":"Andrej Karpathy","screen_name":"karpathy","indices":[0,9]},{"id_str":"1241357619400491008","name":"samsja","screen_name":"samsja19","indices":[10,19]}]},"favorited":false,"in_reply_to_screen_name":"karpathy","lang":"en","retweeted":false,"fact_check":null,"id":"1981760290078539812","view_count":116,"bookmark_count":0,"created_at":1761323447000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1981755493048840271","full_text":"@karpathy @samsja19 damn","in_reply_to_user_id_str":"33836629","in_reply_to_status_id_str":"1981758367996764616","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-26","value":0,"startTime":1761350400000,"endTime":1761436800000,"tweets":[]},{"label":"2025-10-27","value":0,"startTime":1761436800000,"endTime":1761523200000,"tweets":[]},{"label":"2025-10-28","value":32887,"startTime":1761523200000,"endTime":1761609600000,"tweets":[{"bookmarked":false,"display_text_range":[0,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1982794780221542478","view_count":30421,"bookmark_count":880,"created_at":1761570088000,"favorite_count":589,"quote_count":2,"reply_count":8,"retweet_count":50,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"Techniques I'd master to fine-tune LLMs in 
production. Bookmark this \n\n1. LoRA & QLoRA for parameter-efficient fine-tuning \n2. PEFT library for adapter methods \n3. Instruction tuning\n4. Dataset formatting (ChatML, Alpaca, ShareGPT) \n5. DeepSpeed ZeRO for memory optimization \n6. Flash Attention 2 for efficient training \n7. Gradient checkpointing for longer contexts \n8. BitsAndBytes for 4-bit/8-bit quantization \n9. RLHF & DPO for alignment \n10. Tokenizer training & vocabulary extension \n11. Evaluation metrics (perplexity, ROUGE, human eval) \n12. Unsloth for 2x faster fine-tuning \n13. Multi-GPU strategies (FSDP, DDP)","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1982795352978620559","view_count":1015,"bookmark_count":0,"created_at":1761570225000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982793720979239348","full_text":"@ai_for_success Thought for 3 seconds.\n\nYou are absolutely right!","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1982793720979239348","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"337172753","name":"Nathan 
Covey","screen_name":"nathan_covey","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"nathan_covey","lang":"en","retweeted":false,"fact_check":null,"id":"1982804744646140186","view_count":257,"bookmark_count":0,"created_at":1761572464000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982804579373678918","full_text":"@nathan_covey harmony ?","in_reply_to_user_id_str":"337172753","in_reply_to_status_id_str":"1982804579373678918","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,106],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[83,106]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1982795148770541664","view_count":1194,"bookmark_count":4,"created_at":1761570176000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982794780221542478","full_text":"subscribe to my newsletter to get this kinda content in your email inbox, daily 
-\n\nhttps://t.co/ZZMuGnenjh","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1982794780221542478","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-29","value":194313,"startTime":1761609600000,"endTime":1761696000000,"tweets":[{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983151636370293218","view_count":107832,"bookmark_count":1350,"created_at":1761655169000,"favorite_count":862,"quote_count":3,"reply_count":11,"retweet_count":64,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"You're in an ML Engineer interview at Perplexity, and they ask:\n\n\"Your RAG system retrieves at 80% accuracy but only answers correctly 50% of the time. What's wrong?\"\n\nHere's how you answer:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,79],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"962512297083244544","name":"Ishan Goswami","screen_name":"TheIshanGoswami","indices":[0,16]},{"id_str":"1423733191005724672","name":"Exa","screen_name":"ExaAILabs","indices":[17,27]}]},"favorited":false,"in_reply_to_screen_name":"TheIshanGoswami","lang":"en","retweeted":false,"fact_check":null,"id":"1983025036043727231","view_count":668,"bookmark_count":0,"created_at":1761624986000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982948602495545461","full_text":"@TheIshanGoswami @ExaAILabs applied when I was on market.\nnever heard back 
lol.","in_reply_to_user_id_str":"962512297083244544","in_reply_to_status_id_str":"1982948602495545461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[73,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1764814622245412864","name":"Inference","screen_name":"inference_net","indices":[0,14]},{"id_str":"2573166028","name":"Ibrahim Ahmed","screen_name":"atbeme","indices":[15,22]},{"id_str":"1452638090405744643","name":"bonham","screen_name":"bonham_sol","indices":[23,34]},{"id_str":"1598433551422083074","name":"Amar Singh","screen_name":"AmarSVS","indices":[35,43]},{"id_str":"1481089049490333702","name":"Francesco Virga","screen_name":"francescodvirga","indices":[44,60]},{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[61,72]},{"id_str":"1298023616253005826","name":"Michael","screen_name":"michael_chomsky","indices":[73,89]}]},"favorited":false,"in_reply_to_screen_name":"inference_net","lang":"ht","retweeted":false,"fact_check":null,"id":"1983026278447083551","view_count":283,"bookmark_count":0,"created_at":1761625282000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982997753937756451","full_text":"@inference_net @atbeme @bonham_sol @AmarSVS @francescodvirga @0xSamHogan @michael_chomsky THE DEVREL","in_reply_to_user_id_str":"1764814622245412864","in_reply_to_status_id_str":"1982997753937756451","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[47,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[0,8]},{"id_str":"44196397","name":"Elon Musk","screen_name":"elonmusk","indices":[9,18]},{"id_str":"1092693586263457792","name":"Greg 
Yang","screen_name":"TheGregYang","indices":[19,31]},{"id_str":"1494136096371863552","name":"Francesca LaBianca","screen_name":"francesca_lab","indices":[32,46]}]},"favorited":false,"in_reply_to_screen_name":"matanSF","lang":"und","retweeted":false,"fact_check":null,"id":"1983025960845730161","view_count":142,"bookmark_count":0,"created_at":1761625206000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982958350947234014","full_text":"@matanSF @elonmusk @TheGregYang @francesca_lab yay","in_reply_to_user_id_str":"1353833967221313539","in_reply_to_status_id_str":"1982958350947234014","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1141052916570214400","name":"Philipp Schmid","screen_name":"_philschmid","indices":[0,12]},{"id_str":"17424053","name":"fitbit","screen_name":"fitbit","indices":[13,20]}]},"favorited":false,"in_reply_to_screen_name":"_philschmid","lang":"en","retweeted":false,"fact_check":null,"id":"1983127979329822934","view_count":751,"bookmark_count":0,"created_at":1761649529000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983113197386113382","full_text":"@_philschmid @fitbit i moved on from fitbit an year ago. 
love the new UI tho.","in_reply_to_user_id_str":"1141052916570214400","in_reply_to_status_id_str":"1983113197386113382","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,30],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"587016589","name":"Sam Hogan 🇺🇸","screen_name":"0xSamHogan","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"0xSamHogan","lang":"en","retweeted":false,"fact_check":null,"id":"1983026441479422202","view_count":180,"bookmark_count":0,"created_at":1761625321000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1982909039790174635","full_text":"@0xSamHogan this is super cool","in_reply_to_user_id_str":"587016589","in_reply_to_status_id_str":"1982909039790174635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,187],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151639977714090","view_count":12488,"bookmark_count":6,"created_at":1761655170000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Don't say: \"Use a reranker\"\n\nToo generic.\n\nThe real answer starts with understanding what makes RAG reranking different from web search reranking.\n\nSpoiler: It's not just about 
relevance.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151636370293218","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151643408454082","view_count":12897,"bookmark_count":19,"created_at":1761655171000,"favorite_count":22,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's what most people miss about RAG reranking:\n\nWeb search reranking: Optimize for \"which doc will the human click?\" → User reads results → picks best one\n\nRAG reranking: Optimize for \"which doc helps the LLM generate the correct answer?\" → LLM reads results → generates answer\n\nTraditional rerankers are trained on click data (MS MARCO, Natural Questions). 
But your LLM doesn't click - it comprehends.\n\nThis fundamental difference changes EVERYTHING about how you build it.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151639977714090","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,246],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151646491455682","view_count":11608,"bookmark_count":3,"created_at":1761655172000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The fundamental problem with off-the-shelf rerankers:\n\nThey're trained on Natural Questions → Optimized for \"which doc would a human click?\"\n\nBut RAG needs: \"Which doc helps the LLM generate the correct answer?\"\n\nThese are NOT the same objective.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151643408454082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151649225863371","view_count":9496,"bookmark_count":5,"created_at":1761655173000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's the brutal truth about position bias in RAG:\n\nYour LLM with 10 docs at 80% precision = 50% accuracy\nSame LLM with 3 docs at 95% precision = 85% accuracy\n\nWhy? 
\n\nLLMs suffer from \"lost in the middle\" - they ignore positions 4-10.\n\nYour reranker's top-3 matters MORE than your retriever's top-100.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151646491455682","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151651969024335","view_count":7390,"bookmark_count":8,"created_at":1761655173000,"favorite_count":10,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The first principle of RAG reranking:\n\nAnswer containment > Semantic similarity\n\"How to prevent heart attacks?\"\n\nDoc A: \"Heart attacks kill millions annually\" (high similarity ❌)\n\nDoc B: \"Prevent heart attacks through diet and exercise\" (lower similarity ✅)\n\nTraditional rerankers pick A. 
RAG rerankers need B.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151649225863371","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,276],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151654602989653","view_count":5966,"bookmark_count":7,"created_at":1761655174000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The hidden killer in enterprise RAG: Conflicting information across sources.\n\n- Marketing materials vs product docs\n- Q2 notes vs Q1 notes\n- Google Drive vs Microsoft Office docs\n\nInstruction-following rerankers let you specify priority: \n\n\"Prioritize internal sales documents over market analysis. Weight recent documents higher.\"\n\nThis is impossible with traditional rerankers.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151651969024335","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151657887072661","view_count":5003,"bookmark_count":7,"created_at":1761655175000,"favorite_count":9,"quote_count":0,"reply_count":2,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"So how do you actually build this?\n\nStep 1: Generate RAG-specific training data\n\nDon't use MS MARCO. 
Create your own:\n\n- Run your RAG pipeline on real queries\n- Collect (query, retrieved_docs, LLM_answer, ground_truth)\n- Label which docs the LLM SHOULD have used\nThis is your gold dataset.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151654602989653","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151660625973358","view_count":4049,"bookmark_count":3,"created_at":1761655175000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 2: The hard negative problem\n\nBad negative: Random doc about cats (for query about dogs)\nGood negative: \"Dogs are popular pets\" (for query \"How to train dogs?\")\n\nYour reranker needs to learn:\n\n> Topically relevant ≠ Answer-containing\n> Statistics ≠ Instructions\n> Definitions ≠ Procedures","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151657887072661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151663092203612","view_count":3445,"bookmark_count":1,"created_at":1761655176000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Step 3: Optimize for YOUR LLM's behavior\n\nGPT-4o vs Claude vs Llama - they use context differently.\n\nRun this experiment:\n\n> Same docs, different orders\n> Measure answer quality per LLM\n\nYour reranker 
should optimize for YOUR LLM's position bias.\n\nThis can swing accuracy by 15-20%.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151660625973358","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1765409728983703553","name":"getContext","screen_name":"Contextual_AI","indices":[35,49]},{"id_str":"1765409728983703553","name":"getContext","screen_name":"contextual_ai","indices":[35,49]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151665571082725","view_count":2845,"bookmark_count":3,"created_at":1761655176000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"Here's where it gets interesting:\n\n@contextual_ai solved this exact problem.\n\nThey built a reranker specifically for RAG that:\n\n✅ Understands answer containment vs semantic similarity\n✅ Enables metadata-based reranking (recency, source, document type)\n✅ Uses synthetic data generation for instruction-following capability\n✅ Runs in <100ms on 100 docs\n✅ Purpose-built for production RAG pipelines","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151663092203612","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151668536410381","view_count":2556,"bookmark_count":2,"created_at":1761655177000,"favorite_count":10,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The cost equation 
everyone forgets:\n\nBad reranking = Send 100 docs to LLM\n> 100 docs × 500 tokens = 50K tokens\n> GPT-4o (Standard): $0.125 per query (at $2.50 per 1M input tokens)\n> 1M queries: $125K/month\n\nGood reranking = Send 15 docs to LLM\n> 15 docs × 500 tokens = 7.5K tokens\n> GPT-4o (Standard): $0.01875 per query (at $2.50 per 1M input tokens)\n> 1M queries: $18.75K/month\n\nReranking SAVES $106.25K/month in this scenario.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151665571082725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1983151670969131511","view_count":2796,"bookmark_count":6,"created_at":1761655178000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"The production checklist for RAG reranking:\n\n✅ Train on answer containment, not clicks \n✅ Use domain-specific hard negatives \n✅ Optimize for YOUR LLM's behavior \n✅ Measure end-to-end answer quality, not just NDCG \n✅ Keep latency <100ms","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151668536410381","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,222],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com/","url":"https://t.co/XeFxz1AXP6","indices":[181,204]}],"user_mentions":[{"id_str":"1635126531185057793","name":"Contextual 
AI","screen_name":"ContextualAI","indices":[56,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1983151673834164648","view_count":2519,"bookmark_count":10,"created_at":1761655178000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"That's it for today y'all.\n\nThis thread is sponsored by @ContextualAI.\n\nI post such informational ML content daily. To get it in your inbox consider subscribing to my newsletter -\n\nhttps://t.co/XeFxz1AXP6\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1983151670969131511","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1884617498194288640","name":"karthikponna","screen_name":"karthikponna19","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"karthikponna19","lang":"en","retweeted":false,"fact_check":null,"id":"1983160692980257246","view_count":1399,"bookmark_count":0,"created_at":1761657329000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1983151636370293218","full_text":"@karthikponna19 glad you liked 
it!","in_reply_to_user_id_str":"1884617498194288640","in_reply_to_status_id_str":"1983154290983350762","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-10-30","value":0,"startTime":1761696000000,"endTime":1761782400000,"tweets":[]},{"label":"2025-10-31","value":17078,"startTime":1761782400000,"endTime":1761868800000,"tweets":[{"bookmarked":false,"display_text_range":[0,297],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1983871150200648147","view_count":17078,"bookmark_count":447,"created_at":1761826715000,"favorite_count":331,"quote_count":1,"reply_count":5,"retweet_count":44,"user_id_str":"1229293267625234432","conversation_id_str":"1983871150200648147","full_text":"Techniques I'd master to learn Reinforcement Learning. \n\nBookmark this 👇\n\n1. Markov Decision Processes (MDPs) & Bellman equations\n2. Value iteration & policy iteration algorithms\n3. Q-learning & Deep Q-Networks (DQN)\n4. Experience replay & target networks\n5. Policy gradients & Reinforce algorithm\n6. Actor-Critic methods (A2C, A3C)\n7. Proximal Policy Optimization (PPO)\n8. Trust Region Policy Optimization (TRPO)\n9. Soft Actor-Critic (SAC) for continuous control\n10. Twin Delayed DDPG (TD3) algorithm\n11. Exploration strategies (epsilon-greedy, UCB, entropy)\n12. Reward shaping & discount factor selection\n13. Multi-armed bandits & contextual bandits\n14. Monte Carlo methods & temporal difference learning\n15. Function approximation & neural network policies\n16. OpenAI Gym & custom environment design\n17. Stable Baselines3 & RLlib frameworks\n18. Model-based RL (Dyna-Q, World Models)\n19. Imitation learning & behavioral cloning\n20. 
Multi-agent RL & game theory basics","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-01","value":0,"startTime":1761868800000,"endTime":1761955200000,"tweets":[]},{"label":"2025-11-02","value":0,"startTime":1761955200000,"endTime":1762041600000,"tweets":[]},{"label":"2025-11-03","value":0,"startTime":1762041600000,"endTime":1762128000000,"tweets":[]},{"label":"2025-11-04","value":599362,"startTime":1762128000000,"endTime":1762214400000,"tweets":[{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1985323628519395635","view_count":364240,"bookmark_count":3982,"created_at":1762173013000,"favorite_count":2444,"quote_count":13,"reply_count":54,"retweet_count":139,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"Just use OpenAI API\"\n\nUntil you need:\n- Custom fine-tuned models\n- <50ms p99 latency \n- $0.001/1K tokens (not $1.25/1K input)\n\nThen you build your own inference platform.\n\nHere's how to do that:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[22,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"533080721","name":"Arjun Jain | Fast Code 
AI","screen_name":"ArjunFastCode","indices":[0,14]},{"id_str":"19380829","name":"Yahoo","screen_name":"Yahoo","indices":[15,21]}]},"favorited":false,"in_reply_to_screen_name":"ArjunFastCode","lang":"und","retweeted":false,"fact_check":null,"id":"1985204562526167083","view_count":1627,"bookmark_count":0,"created_at":1762144625000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985148326875128037","full_text":"@ArjunFastCode @Yahoo wow","in_reply_to_user_id_str":"533080721","in_reply_to_status_id_str":"1985148326875128037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[25,61],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1235989037938167808","name":"Thijs","screen_name":"cdngdev","indices":[0,8]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[9,24]},{"id_str":"284333988","name":"Logan Kilpatrick","screen_name":"OfficialLoganK","indices":[25,40]}]},"favorited":false,"in_reply_to_screen_name":"cdngdev","lang":"en","retweeted":false,"fact_check":null,"id":"1985317101310242968","view_count":529,"bookmark_count":0,"created_at":1762171457000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985209684828328025","full_text":"@cdngdev @OfficialLoganK @OfficialLoganK send one my way 
too🙈","in_reply_to_user_id_str":"1235989037938167808","in_reply_to_status_id_str":"1985209684828328025","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,155],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323629307920638","view_count":24188,"bookmark_count":19,"created_at":1762173013000,"favorite_count":72,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Most engineers think \"build your own\" means:\n- Rent some GPUs\n- Load model with vLLM\n- Wrap it in FastAPI\n- Ship it\n\nThe complexity hits you around week 2.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323628519395635","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,254],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323630121632117","view_count":22916,"bookmark_count":3,"created_at":1762173013000,"favorite_count":55,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Remember: You're not building a system to serve one model to one user. 
\n\nYou're building a system that handles HUNDREDS of concurrent requests, across multiple models, with wildly different latency requirements.\n\nThat's a fundamentally different problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323629307920638","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323631115637136","view_count":22687,"bookmark_count":59,"created_at":1762173013000,"favorite_count":89,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"What you actually need: \n> A request router that understands model capabilities.\n> A dynamic batcher that groups requests without killing latency. \n> A KV cache manager that doesn't OOM your GPUs. 
\n> A model instance pool that handles traffic spikes.\n\nAnd that's just the core components.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323630121632117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323632072048946","view_count":21161,"bookmark_count":25,"created_at":1762173014000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Your <50ms p99 requirement breaks down as:\n\n- Network overhead: 10-15ms (you can't fix this)\n- Queueing delay: 5-20ms (if you batch wrong, this explodes)\n- First token latency: 20-40ms (model dependent)\n- Per-token generation: 10-50ms (grows with context length)\n\nYou have maybe 5ms of slack. 
This is why \"just throw H100s at it\" fails.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323631115637136","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,100],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[51,74]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/8cZqbXdjx0","expanded_url":"https://x.com/athleticKoder/status/1985323632915104127/photo/1","id_str":"1985323625352724480","indices":[101,124],"media_key":"3_1985323625352724480","media_url_https":"https://pbs.twimg.com/media/G41JUY1XAAAjjaq.jpg","type":"photo","url":"https://t.co/8cZqbXdjx0","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":
{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1985323625352724480"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985323632915104127","view_count":18749,"bookmark_count":32,"created_at":1762173014000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"btw get this kinda content in your inbox daily - \n\nhttps://t.co/WiituAqKHr\n\nnow back to the thread - https://t.co/8cZqbXdjx0","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632072048946","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323634055885082","view_count":16316,"bookmark_count":32,"created_at":1762173014000,"favorite_count":46,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The first principle of inference platforms:\n\nContinuous batching ≠ Static batching\n\nStatic batching waits for 8 requests, then processes them together. Continuous batching processes 8 requests and adds request #9 mid-generation.\n\nvLLM does this. TensorRT-LLM does this. 
Your FastAPI wrapper doesn't.\n\nThis single difference is 3-5x throughput.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323632915104127","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323635091919170","view_count":14501,"bookmark_count":13,"created_at":1762173014000,"favorite_count":28,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"KV cache memory makes things difficult.\n\nLlama 70B at 4K context needs 560GB of KV cache for just 32 concurrent requests. Your H100 has 80GB total.\n\nPagedAttention (from vLLM) solved this by treating KV cache like virtual memory. Manual implementation? You'll OOM before you understand why.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323634055885082","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,281],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323636069204297","view_count":12662,"bookmark_count":8,"created_at":1762173015000,"favorite_count":22,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"\"We have 20 fine-tuned models for different tasks\"\n\nNow your platform needs model routing based on user intent. \n\nDynamic loading and unloading so you don't keep 20 models in memory. \n\nShared KV cache across similar base models. 
\n\nLoRA adapter swapping in <100ms.\n\nThis is where 90% of DIY inference platforms die.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323635091919170","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323637138739674","view_count":10683,"bookmark_count":19,"created_at":1762173015000,"favorite_count":37,"quote_count":1,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Use OpenAI API when you're under 100K requests/month, using standard models, can tolerate 500ms+ latency, and cost per request is 10x higher than raw compute.\n\nBuild your own when you have custom models, doing 500K+ requests/month, need sub-100ms p99, or when cost optimization actually matters.\n\nThe break-even is usually around $5-10K/month in API spend.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323636069204297","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323638128513350","view_count":9944,"bookmark_count":11,"created_at":1762173015000,"favorite_count":30,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Let's do the actual math:\n\nOpenAI GPT-5 pricing: $1.25 per 1M input tokens, $10 per 1M output tokens\n\n1M requests × 1K input tokens × 500 output tokens = $1,250 input + $5,000 output = $6,250\n\nYour H100 inference platform at $2/hour: 1M requests at 
100 req/sec = 2.8 hours = $5.60 in compute.\n\nBut you forgot engineering time ($50K to build), maintenance ($10K/month), and the 6 months to break even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323637138739674","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323639051297214","view_count":8777,"bookmark_count":23,"created_at":1762173015000,"favorite_count":29,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Production inference platforms have four layers:\n\nRequest handling (load balancer, rate limiter, queue). Orchestration (model router, dynamic batcher, priority scheduler). Inference engine (vLLM/TRT-LLM, KV cache manager, multi-GPU coordinator). Observability (per-component latency, GPU utilization, cost per token).\n\nMost engineers build layer 1 and 3, then wonder why production breaks.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323638128513350","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,283],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640020230613","view_count":8115,"bookmark_count":9,"created_at":1762173015000,"favorite_count":20,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The mistakes that kill DIY inference platforms:\n\n> Ignoring queueing theory. Your GPU isn't the bottleneck - your queue is. 
Requests pile up faster than you can batch them.\n\n> Optimizing throughput over latency. Sure you hit 1000 tokens/sec in aggregate, but user experience is terrible because individual requests wait.\n\n> Not measuring per-token latency. Your p99 looks fine until you realize tokens 50-100 are taking 200ms each.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323639051297214","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323640976486896","view_count":7521,"bookmark_count":8,"created_at":1762173016000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"Here's where it gets interesting: speculative decoding, prefix caching, and continuous batching work AGAINST each other.\n\nSpeculative decoding wants more compute upfront for faster generation. Prefix caching wants more memory to reuse common contexts. Continuous batching wants shorter sequences for better throughput.\n\nOptimize one, degrade the others. 
This tradeoff doesn't exist when you're just calling OpenAI's API.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640020230613","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,291],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323641895063811","view_count":8108,"bookmark_count":33,"created_at":1762173016000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"The production checklist for inference platforms:\n\n> Use continuous batching (vLLM or TensorRT-LLM, not raw PyTorch). \n> Implement request prioritization from day one. \n> Monitor per-component latency, not just end-to-end. \n> Auto-scale based on queue depth, not CPU. \n> Track both $/token AND tokens/sec. \n\nHave model hot-swapping ready. Plan for 10x traffic spikes.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323640976486896","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,238],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1985323642838741009","view_count":7423,"bookmark_count":31,"created_at":1762173016000,"favorite_count":51,"quote_count":1,"reply_count":5,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"That's it for today.\n\nBuilding an inference platform is a 6-month engineering project with hidden costs everywhere.\n\nBut when you hit scale? 
It pays for itself in weeks.\n\nThe key is knowing when to build vs when to rent.\n\nSee ya tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1985323641895063811","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,77],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985379598935433542","view_count":2,"bookmark_count":0,"created_at":1762186357000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b very misleading but good tweet\ne2b is soliddd btw","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985379370207523200","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,36],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"in","retweeted":false,"fact_check":null,"id":"1985378427407622410","view_count":676,"bookmark_count":0,"created_at":1762186078000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b 
profit??","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985369879516475496","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,34],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985388845118951446","view_count":8,"bookmark_count":0,"created_at":1762188562000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b yessir","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985381437244420468","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,89],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985390430482043238","view_count":3,"bookmark_count":0,"created_at":1762188940000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b e2b hiring devrel? \na friend is looking. 
mind if i refer him?","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985389323303448813","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[11,14],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1737595658909908992","name":"Ravi | ML Engineer","screen_name":"RaviRaiML","indices":[0,10]}]},"favorited":false,"in_reply_to_screen_name":"RaviRaiML","lang":"und","retweeted":false,"fact_check":null,"id":"1985372235780194629","view_count":2207,"bookmark_count":0,"created_at":1762184602000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@RaviRaiML Lol","in_reply_to_user_id_str":"1737595658909908992","in_reply_to_status_id_str":"1985348872735007018","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1483878228884434953","name":"Solution Dev.ai","screen_name":"Paulfruitful_","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"Paulfruitful_","lang":"en","retweeted":false,"fact_check":null,"id":"1985372142058512440","view_count":2260,"bookmark_count":0,"created_at":1762184579000,"favorite_count":4,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Paulfruitful_ This🤣🤣🤣","in_reply_to_user_id_str":"1483878228884434953","in_reply_to_status_id_str":"1985356366496604373","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,20],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1794338878364536832","name":"Andrea 
Villa","screen_name":"curlyhacks1","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"curlyhacks1","lang":"en","retweeted":false,"fact_check":null,"id":"1985371986399543315","view_count":2734,"bookmark_count":0,"created_at":1762184542000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@curlyhacks1 EXACTLY","in_reply_to_user_id_str":"1794338878364536832","in_reply_to_status_id_str":"1985365060005339301","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,100],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1908764549483761664","name":"KandaBhaji","screen_name":"KandaBhaji_x","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"KandaBhaji_x","lang":"en","retweeted":false,"fact_check":null,"id":"1985372467960099018","view_count":953,"bookmark_count":0,"created_at":1762184657000,"favorite_count":6,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@KandaBhaji_x If your app goes viral on a free plan be ready to sell your kidney to pay openai bills","in_reply_to_user_id_str":"1908764549483761664","in_reply_to_status_id_str":"1985359579484676338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,47],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"258300417","name":"Vasek 
Mlejnsky","screen_name":"mlejva","indices":[0,7]},{"id_str":"1545025796003139590","name":"Bun","screen_name":"bunjavascript","indices":[8,22]},{"id_str":"1642834485673619457","name":"E2B","screen_name":"e2b","indices":[23,27]}]},"favorited":false,"in_reply_to_screen_name":"mlejva","lang":"en","retweeted":false,"fact_check":null,"id":"1985394385354145910","view_count":8,"bookmark_count":0,"created_at":1762189882000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985369879516475496","full_text":"@mlejva @bunjavascript @e2b sure i will DM you!","in_reply_to_user_id_str":"258300417","in_reply_to_status_id_str":"1985390666390942079","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[12,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"960273380","name":"Tyler Shukert","screen_name":"dshukertjr","indices":[0,11]}]},"favorited":false,"in_reply_to_screen_name":"dshukertjr","lang":"tl","retweeted":false,"fact_check":null,"id":"1985378595032977859","view_count":70,"bookmark_count":0,"created_at":1762186118000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985331627522687365","full_text":"@dshukertjr supabase 🫶🏻","in_reply_to_user_id_str":"960273380","in_reply_to_status_id_str":"1985331627522687365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2455473522","name":"Smakosh","screen_name":"smakosh","indices":[0,8]},{"id_str":"1933570303709286401","name":"LLM 
Gateway","screen_name":"llmgateway","indices":[9,20]}]},"favorited":false,"in_reply_to_screen_name":"smakosh","lang":"en","retweeted":false,"fact_check":null,"id":"1985421969186066899","view_count":1150,"bookmark_count":0,"created_at":1762196459000,"favorite_count":5,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@smakosh @llmgateway dude, gateway doesn't solve this problem.","in_reply_to_user_id_str":"2455473522","in_reply_to_status_id_str":"1985403293040845245","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,13],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1945670299950653440","name":"nayra","screen_name":"yanarsw","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"yanarsw","lang":"en","retweeted":false,"fact_check":null,"id":"1985424250443100178","view_count":913,"bookmark_count":0,"created_at":1762197003000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@yanarsw good","in_reply_to_user_id_str":"1945670299950653440","in_reply_to_status_id_str":"1985420090980929563","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,85],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"189057736","name":"FrodoMercury","screen_name":"Frodo_Mercury","indices":[0,14]},{"id_str":"25191683","name":"Robert 
Nishihara","screen_name":"robertnishihara","indices":[35,51]}]},"favorited":false,"in_reply_to_screen_name":"Frodo_Mercury","lang":"en","retweeted":false,"fact_check":null,"id":"1985424427627004213","view_count":506,"bookmark_count":0,"created_at":1762197045000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@Frodo_Mercury nope didn't try ray\n@robertnishihara is best person to answer this imo","in_reply_to_user_id_str":"189057736","in_reply_to_status_id_str":"1985388938865811699","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[17,25],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551262560648712192","name":"Karan Jagtiani","screen_name":"karanjagtiani04","indices":[0,16]}]},"favorited":false,"in_reply_to_screen_name":"karanjagtiani04","lang":"tl","retweeted":false,"fact_check":null,"id":"1985413272271798659","view_count":517,"bookmark_count":0,"created_at":1762194385000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@karanjagtiani04 ai reply","in_reply_to_user_id_str":"1551262560648712192","in_reply_to_status_id_str":"1985383575152099666","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[32,37],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1785063778666651649","name":"Alvaro Mysterio","screen_name":"AlvaroMysterio","indices":[0,15]},{"id_str":"1853412668696408064","name":"Nebius AI 
Studio","screen_name":"nebiusaistudio","indices":[16,31]}]},"favorited":false,"in_reply_to_screen_name":"AlvaroMysterio","lang":"en","retweeted":false,"fact_check":null,"id":"1985424270260908310","view_count":1225,"bookmark_count":0,"created_at":1762197008000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@AlvaroMysterio @nebiusaistudio solid","in_reply_to_user_id_str":"1785063778666651649","in_reply_to_status_id_str":"1985403822814920755","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,71],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1985461042286112800","view_count":3009,"bookmark_count":1,"created_at":1762205775000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@_willfalcon plenty other alternatives but litserve is a great one too!","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1985377313886794117","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,97],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"953183202654478337","name":"Mert 
Ünsal","screen_name":"mertunsal2020","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"mertunsal2020","lang":"en","retweeted":false,"fact_check":null,"id":"1985461550488949194","view_count":2192,"bookmark_count":0,"created_at":1762205896000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@mertunsal2020 there are plenty of consultancies. and i think companies want to hire full timers.","in_reply_to_user_id_str":"953183202654478337","in_reply_to_status_id_str":"1985400795668594806","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,10],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1777115021807661056","name":"luffy","screen_name":"0xluffy","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"0xluffy","lang":"qme","retweeted":false,"fact_check":null,"id":"1985260340398145700","view_count":782,"bookmark_count":0,"created_at":1762157924000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985005756635377764","full_text":"@0xluffy 
🤣","in_reply_to_user_id_str":"1777115021807661056","in_reply_to_status_id_str":"1985005756635377764","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-05","value":23287,"startTime":1762214400000,"endTime":1762300800000,"tweets":[{"bookmarked":false,"display_text_range":[0,104],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[93,104]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/IamxGO7Tvf","expanded_url":"https://x.com/athleticKoder/status/1985686969318584700/photo/1","id_str":"1985596004754673664","indices":[105,128],"media_key":"3_1985596004754673664","media_url_https":"https://pbs.twimg.com/media/G45BC9LWUAAtLnX.png","type":"photo","url":"https://t.co/IamxGO7Tvf","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":372,"w":678,"resize":"fit"},"medium":{"h":372,"w":678,"resize":"fit"},"small":{"h":372,"w":678,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":372,"width":678,"focus_rects":[{"x":14,"y":0,"w":664,"h":372},{"x":236,"y":0,"w":372,"h":372},{"x":259,"y":0,"w":326,"h":372},{"x":329,"y":0,"w":186,"h":372},{"x":0,"y":0,"w":678,"h":372}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985596004754673664"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985686969318584700","view_count":338,"bookmark_count":0,"created_at":1762259640000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985686969318584700","full_text":"i am sad\n\ni can't get creator's revenue share thanks to stripe's India ban\n\npls do something @nikitabier 
https://t.co/IamxGO7Tvf","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,44],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/onhHT6yYKs","expanded_url":"https://x.com/athleticKoder/status/1985714157451403399/photo/1","id_str":"1985713748762595328","indices":[45,68],"media_key":"3_1985713748762595328","media_url_https":"https://pbs.twimg.com/media/G46sIjyakAATMHR.jpg","type":"photo","url":"https://t.co/onhHT6yYKs","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1300,"w":2048,"resize":"fit"},"medium":{"h":762,"w":1200,"resize":"fit"},"small":{"h":432,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1866,"width":2940,"focus_rects":[
{"x":0,"y":132,"w":2940,"h":1646},{"x":1051,"y":0,"w":1866,"h":1866},{"x":1166,"y":0,"w":1637,"h":1866},{"x":1518,"y":0,"w":933,"h":1866},{"x":0,"y":0,"w":2940,"h":1866}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1985713748762595328"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985714157451403399","view_count":18158,"bookmark_count":5,"created_at":1762266122000,"favorite_count":154,"quote_count":3,"reply_count":7,"retweet_count":24,"user_id_str":"1229293267625234432","conversation_id_str":"1985714157451403399","full_text":"whatsapp web down.\ntime to touch some grass. https://t.co/onhHT6yYKs","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,65],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1985548815735349501","view_count":1452,"bookmark_count":0,"created_at":1762226702000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985536752015327681","full_text":"@ai_for_success how bad is it?\n(in terms of quality of 
responses)","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1985536752015327681","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[21,34],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"331290294","name":"Chris Best","screen_name":"cjgbest","indices":[0,8]},{"id_str":"636513296","name":"Nikita 
Bier","screen_name":"nikitabier","indices":[9,20]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/c3MWIgc01r","expanded_url":"https://x.com/athleticKoder/status/1985668047382986778/photo/1","id_str":"1985668036083474432","indices":[35,58],"media_key":"3_1985668036083474432","media_url_https":"https://pbs.twimg.com/media/G46CjuyaYAAlRgw.jpg","type":"photo","url":"https://t.co/c3MWIgc01r","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1298,"w":1179,"resize":"fit"},"medium":{"h":1200,"w":1090,"resize":"fit"},"small":{"h":680,"w":618,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1298,"width":1179,"focus_rects":[{"x":0,"y":638,"w":1179,"h":660},{"x":0,"y":119,"w":1179,"h":1179},{"x":0,"y":0,"w":1139,"h":1298},{"x":0,"y":0,"w":649,"h":1298},{"x":0,"y":0,"w":1179,"h":1298}]},"media_results":{"result":{"media_key":"3_1985668036083474432"}}}]},"favorited":false,"in_reply_to_screen_name":"cjgbest","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985668047382986778","view_count":1263,"bookmark_count":0,"created_at":1762255129000,"favorite_count":1,"quote_count":1,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985464687350485092","full_text":"@cjgbest @nikitabier i noticed too https://t.co/c3MWIgc01r","in_reply_to_user_id_str":"331290294","in_reply_to_status_id_str":"1985464687350485092","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,74],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"9701382","name":"Hamish 
McKenzie","screen_name":"hamishmckenzie","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"hamishmckenzie","lang":"en","quoted_status_id_str":"1985668047382986778","quoted_status_permalink":{"url":"https://t.co/3gYbtnvGVp","expanded":"https://twitter.com/athletickoder/status/1985668047382986778","display":"x.com/athletickoder/…"},"retweeted":false,"fact_check":null,"id":"1985670006240378928","view_count":245,"bookmark_count":0,"created_at":1762255596000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985468137324888191","full_text":"@hamishmckenzie please support a non stripe payment gateway for India sir.","in_reply_to_user_id_str":"9701382","in_reply_to_status_id_str":"1985468137324888191","is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,40],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1005172607417765888","name":"Botir Khaltaev","screen_name":"botir33751732","indices":[0,14]}]},"favorited":false,"in_reply_to_screen_name":"botir33751732","lang":"en","retweeted":false,"fact_check":null,"id":"1985543264905335104","view_count":632,"bookmark_count":0,"created_at":1762225378000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@botir33751732 all the best paying bills","in_reply_to_user_id_str":"1005172607417765888","in_reply_to_status_id_str":"1985413788456419500","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[10,123],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"36039399","name":"Eduardo 
Borges","screen_name":"duborges","indices":[0,9]}]},"favorited":false,"in_reply_to_screen_name":"duborges","lang":"en","retweeted":false,"fact_check":null,"id":"1985581419943641165","view_count":519,"bookmark_count":0,"created_at":1762234475000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@duborges i think saas is good for initial versions of product but scaling with it may cost fortune (if volume is involved)","in_reply_to_user_id_str":"36039399","in_reply_to_status_id_str":"1985488193349722260","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,27],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"19914039","name":"iDare e/acc","screen_name":"idare","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"idare","lang":"en","retweeted":false,"fact_check":null,"id":"1985641922392907887","view_count":680,"bookmark_count":0,"created_at":1762248900000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985323628519395635","full_text":"@idare so did your 
landlord","in_reply_to_user_id_str":"19914039","in_reply_to_status_id_str":"1985567808978329847","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-06","value":88316,"startTime":1762300800000,"endTime":1762387200000,"tweets":[{"bookmarked":false,"display_text_range":[0,37],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"faces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/aeOxW7rfnq","expanded_url":"https://x.com/athleticKoder/status/1985942086051643652/photo/1","id_str":"1985942078816432128","indices":[38,61],"media_key":"3_1985942078816432128","media_url_https":"https://pbs.twimg.com/media/G497zHhasAATAtQ.jpg","type":"photo","url":"https://t.co/aeOxW7rfnq","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"medium":{"faces":[{"x":306,"y":28,"h":48,"w":48}]},"small":{"faces":[{"x":176,"y":16,"h":27,"w":27}]},"orig":{"f
aces":[{"x":306,"y":28,"h":48,"w":48}]}},"sizes":{"large":{"h":780,"w":1179,"resize":"fit"},"medium":{"h":780,"w":1179,"resize":"fit"},"small":{"h":450,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":780,"width":1179,"focus_rects":[{"x":0,"y":0,"w":1179,"h":660},{"x":0,"y":0,"w":780,"h":780},{"x":0,"y":0,"w":684,"h":780},{"x":69,"y":0,"w":390,"h":780},{"x":0,"y":0,"w":1179,"h":780}]},"media_results":{"result":{"media_key":"3_1985942078816432128"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1985942086051643652","view_count":19383,"bookmark_count":106,"created_at":1762320464000,"favorite_count":292,"quote_count":0,"reply_count":11,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1985942086051643652","full_text":"gm chat \n\nwhat are you cooking today? https://t.co/aeOxW7rfnq","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,173],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1986048433040363769","view_count":32497,"bookmark_count":478,"created_at":1762345820000,"favorite_count":272,"quote_count":2,"reply_count":13,"retweet_count":19,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"I spent 2 weeks building an eval framework from scratch.\n\nThen I saw how Anthropic and OpenAI actually do evals.\n\nI did literally everything wrong. 
\n\nHere is what I learned.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048433862394064","view_count":3690,"bookmark_count":6,"created_at":1762345820000,"favorite_count":14,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"My \"eval framework\" looked like this:\n\n- 50 hardcoded test cases in a JSON file\n- Run prompt through model\n- Compare output to expected string\n- Pass if exact match, fail otherwise\n- Print pass rate\n\nShipped it. Felt smart. \n\nIt broke in production within 3 days.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433040363769","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048434764136907","view_count":3880,"bookmark_count":2,"created_at":1762345820000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The bug report: \"Model is way worse after the update\"\n\nI checked the evals: 94% pass rate. Same as before.\n\nSpent 4 hours debugging. The issue? My eval was testing the WRONG thing.\n\nI was checking if output contained \"yes\" or \"no\". 
Model learned to always say \"yes, but actually no...\"\n\nMy eval said: ✅ Pass\nReality: Completely broken","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048433862394064","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,113],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"http://fullstackagents.substack.com","url":"https://t.co/WiituAqKHr","indices":[64,87]}],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1986048435816890460","view_count":3561,"bookmark_count":1,"created_at":1762345820000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"btw subscribe to my newsletter and don't miss any of my posts -\nhttps://t.co/WiituAqKHr\n\nnow back to the thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048434764136907","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048436882342243","view_count":3532,"bookmark_count":3,"created_at":1762345821000,"favorite_count":7,"quote_count":0,"reply_count":4,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #1: Static test cases\n\nI had 50 hardcoded examples. Model memorized the patterns.\n\nWhat the pros do: Generate test cases programmatically. Vary the phrasing, entities, edge cases. 
Use templates with random substitutions.\n\nSame capability, infinite test variations. Can't memorize your way out.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048435816890460","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":true,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048437884756364","view_count":3201,"bookmark_count":3,"created_at":1762345821000,"favorite_count":4,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #2: Binary pass/fail\n\nReal models don't output exactly what you expect. They paraphrase. Add context. Sometimes they're \"partially right.\"\n\nMy framework: \"Expected: Paris, Got: Paris, France\" → ❌ FAIL\n\nWhat I should've done: LLM-as-judge to score 0-10. Parse structured outputs. Use semantic similarity for fuzzy matching.\n\nBinary scoring is a lie.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048436882342243","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048439122063480","view_count":2757,"bookmark_count":1,"created_at":1762345821000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #3: Not versioning eval results over time\n\nI ran evals once per deployment. Pass rate looked stable at ~95%.\n\nBut I wasn't tracking WHICH questions passed. Questions that passed last month started failing. 
New ones started passing.\n\nModel was shifting capabilities, not improving.\n\nI was flying blind.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048437884756364","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048440359342395","view_count":2373,"bookmark_count":0,"created_at":1762345821000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #4: I was measuring outputs, not capabilities\n\nMy eval: \"Does the model return valid JSON?\"\nWhat I should've measured: \"Can the model extract structured data from unstructured text?\"\n\nThe difference? One JSON format change broke all my tests.\n\nEvals should test model capabilities, not output formatting.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048439122063480","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048441324126467","view_count":2014,"bookmark_count":1,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #5: No baseline model for comparison\n\nMy eval said: \"GPT-4 scores 78%\"\n\nIs that good? Bad? I had no idea.\n\nWhat I needed: Run the same eval on GPT-3.5, Claude, Llama. 
Now I know 78% is either amazing or terrible depending on task difficulty.\n\nWithout baselines, your scores are meaningless.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048440359342395","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048442267763019","view_count":1791,"bookmark_count":0,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Mistake #6: Hardcoding prompts in my test cases\n\nEvery test looked like:\n```\n{\n \"input\": \"Summarize this: ...\",\n \"expected\": \"...\"\n}\n```\n\nChanged the system prompt? All my tests broke.\n\nWhat I learned: Separate test data from prompt templates. 
Tests should specify WHAT to test, not HOW to prompt.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048441324126467","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,286],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048443249295748","view_count":1753,"bookmark_count":9,"created_at":1762345822000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"Then I read the blogs on Anthropic's evals and OpenAI's Evals framework.\n\nThe principles I missed:\n\n> Model-graded evals (LLM judges LLM outputs)\n> Factored cognition (break complex tasks into atomic skills)\n> Diverse test distributions (not just happy path)\n> Contamination detection (did model see this in training?)\n\nEverything clicked.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048442267763019","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048444243312661","view_count":1429,"bookmark_count":7,"created_at":1762345822000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"A production eval framework needs:\n\nTest case generator (not static JSON files). Multi-dimensional scoring (not binary pass/fail). Baseline comparison across models. Regression tracking per capability. Contamination checks for data leakage. 
Version control for eval results over time.\n\nMy 2-week project was missing all of this.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048443249295748","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,273],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048445161890046","view_count":1299,"bookmark_count":2,"created_at":1762345822000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"The hardest lesson: Your evals will be gamed.\n\nNot intentionally. But models find shortcuts. They learn the eval distribution, not the capability.\n\nThis is why test case generation matters. Why you need adversarial examples. Why you rotate your eval sets.\n\nStatic evals are security 
theater.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048444243312661","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]},{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[200,215]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[217,226]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[228,237]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048446353068359","view_count":1180,"bookmark_count":7,"created_at":1762345823000,"favorite_count":7,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"So should you build your own eval framework?\n\nBuild if: You need custom evals for domain-specific tasks. You're evaluating proprietary models. You need full control over scoring logic.\n\nUse existing (@braintrustdata, @deepeval, @ragas_io) if: You're evaluating general capabilities. You want battle-tested infrastructure. 
Time-to-insight matters more than customization.\n\nI wish I'd used Braintrust first, then built custom.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048445161890046","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,311],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048447334486155","view_count":1770,"bookmark_count":8,"created_at":1762345823000,"favorite_count":4,"quote_count":1,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"If I rebuilt my eval framework today:\n\nStart with LLM-as-judge for scoring. Generate test cases programmatically with templates. Track per-capability metrics, not overall pass rate. Run baselines (GPT-4.1, Claude, etc.) for comparison. Version control eval results like code. 
Build contamination detection from day 1.\n\nWould've saved me 2 weeks of rewrites.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048446353068359","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[36,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1687132266391289856","name":"Braintrust","screen_name":"braintrustdata","indices":[0,15]},{"id_str":"1888058644434141184","name":"DeepEval","screen_name":"deepeval","indices":[16,25]},{"id_str":"1764911957541584896","name":"ragas","screen_name":"ragas_io","indices":[26,35]},{"id_str":"1229293267625234432","name":"anshuman","screen_name":"athleticKoder","indices":[214,228]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1986048448336953578","view_count":1577,"bookmark_count":3,"created_at":1762345823000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@braintrustdata @deepeval @ragas_io That's it for today.\n\nBuilding evals is harder than building the model itself.\n\nBut without good evals, you're deploying blind.\n\nThe key: Test capabilities, not outputs.\n\nFollow @athleticKoder for more and See ya 
tomorrow!","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1986048447334486155","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,23],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"3220121588","name":"prajwal","screen_name":"prajpawar23","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"prajpawar23","lang":"en","retweeted":false,"fact_check":null,"id":"1986092966533079343","view_count":165,"bookmark_count":0,"created_at":1762356437000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@prajpawar23 you should","in_reply_to_user_id_str":"3220121588","in_reply_to_status_id_str":"1986080688169521392","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[9,15],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1254242216459087873","name":"nyx 👻","screen_name":"Niyxuis","indices":[0,8]}]},"favorited":false,"in_reply_to_screen_name":"Niyxuis","lang":"en","retweeted":false,"fact_check":null,"id":"1986153583059083337","view_count":29,"bookmark_count":0,"created_at":1762370889000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@Niyxuis 
thisss","in_reply_to_user_id_str":"1254242216459087873","in_reply_to_status_id_str":"1986140209365590365","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,50],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1909287720029138945","name":"Ayush","screen_name":"ayushhcantcode","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ayushhcantcode","lang":"en","retweeted":false,"fact_check":null,"id":"1986153228355117390","view_count":24,"bookmark_count":0,"created_at":1762370805000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986048433040363769","full_text":"@ayushhcantcode very very important for production","in_reply_to_user_id_str":"1909287720029138945","in_reply_to_status_id_str":"1986132672482246725","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,62],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"784830007","name":"Victor M","screen_name":"victormustar","indices":[0,13]},{"id_str":"1217077743654801408","name":"1LittleCoder💻","screen_name":"1littlecoder","indices":[14,27]},{"id_str":"14285438","name":"Jonathan Fischoff","screen_name":"jfischoff","indices":[28,38]}]},"favorited":false,"in_reply_to_screen_name":"victormustar","lang":"en","retweeted":false,"fact_check":null,"id":"1986123825126551565","view_count":411,"bookmark_count":0,"created_at":1762363794000,"favorite_count":1,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1985658210057925096","full_text":"@victormustar @1littlecoder @jfischoff we need this in fal 
!!!","in_reply_to_user_id_str":"784830007","in_reply_to_status_id_str":"1985658210057925096","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-07","value":0,"startTime":1762387200000,"endTime":1762473600000,"tweets":[]},{"label":"2025-11-08","value":491,"startTime":1762473600000,"endTime":1762560000000,"tweets":[{"bookmarked":false,"display_text_range":[36,93],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1202267633049100291","name":"merve","screen_name":"mervenoyann","indices":[0,12]},{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[13,25]},{"id_str":"100818387","name":"Mishig Davaadorj","screen_name":"mishig25","indices":[26,35]}]},"favorited":false,"in_reply_to_screen_name":"mervenoyann","lang":"en","quoted_status_id_str":"1940082259287069089","quoted_status_permalink":{"url":"https://t.co/jg0pwEOyZD","expanded":"https://twitter.com/julien_c/status/1940082259287069089","display":"x.com/julien_c/statu…"},"retweeted":false,"fact_check":null,"id":"1986750102460153972","view_count":491,"bookmark_count":0,"created_at":1762513111000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986731201873277267","full_text":"@mervenoyann @huggingface @mishig25 this is super useful🕺🏼\nbtw wasn't huggingchat taken 
down?","in_reply_to_user_id_str":"1202267633049100291","in_reply_to_status_id_str":"1986731201873277267","is_quote_status":1,"is_ai":null,"ai_score":null}]},{"label":"2025-11-09","value":6564,"startTime":1762560000000,"endTime":1762646400000,"tweets":[{"bookmarked":false,"display_text_range":[0,290],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987135547718992059","view_count":2613,"bookmark_count":15,"created_at":1762605008000,"favorite_count":21,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"> be yc founder\n> AI startup\n> gpu is moat\n> burn $400K on GPU cluster \n> hire infra engineer\n> GPUs sit idle 19 hours/day\n> mfw Modal would've cost $2K/month\n> mfw I could've shipped 3 products instead of managing Kubernetes\n\nHere's how serverless can save you $$$:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[15,29],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"499018916","name":"Katelyn Lesse","screen_name":"katelyn_lesse","indices":[0,14]},{"id_str":"2248766490","name":"Abhishek (key/value)","screen_name":"StalwartCoder","indices":[15,29]}]},"favorited":false,"in_reply_to_screen_name":"katelyn_lesse","lang":"qam","retweeted":false,"fact_check":null,"id":"1986979349929807888","view_count":653,"bookmark_count":0,"created_at":1762567767000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1986879086506221945","full_text":"@katelyn_lesse 
@StalwartCoder","in_reply_to_user_id_str":"499018916","in_reply_to_status_id_str":"1986879086506221945","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[8,42],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1552580106257928192","name":"fofr","screen_name":"fofrAI","indices":[0,7]},{"id_str":"39622874","name":"fal","screen_name":"fal","indices":[38,42]}]},"favorited":false,"in_reply_to_screen_name":"fofrAI","lang":"en","retweeted":false,"fact_check":null,"id":"1987200681246400703","view_count":596,"bookmark_count":0,"created_at":1762620537000,"favorite_count":2,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987105424580202657","full_text":"@fofrAI don't tell me you are joining @fal","in_reply_to_user_id_str":"1552580106257928192","in_reply_to_status_id_str":"1987105424580202657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,268],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135548490678703","view_count":170,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Everyone obsesses over \"owning your infrastructure.\"\nMeanwhile, serverless GPU platforms solved the hardest parts:\n\n> Cold start optimization\n> Auto-scaling\n> Multi-region deployment\n> GPU multiplexing\n\nYou get enterprise-grade infra with 10 lines of 
code.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135547718992059","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135549283422644","view_count":157,"bookmark_count":0,"created_at":1762605008000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The magic of serverless GPUs isn't just \"no DevOps.\"\n\nIt's the cost model:\nYou pay $0.0008/sec for H100 time. Your agent runs for 3 seconds = $0.0024. With 10K requests/day = $24/day.\n\nTraditional GPU rental: $2.50/hour × 24 hours = $60/day at 4% utilization.\n\nThat's the unlock.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135548490678703","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,289],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]},{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[63,69]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135550185164911","view_count":155,"bookmark_count":2,"created_at":1762605009000,"favorite_count":2,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Here's what's actually possible with Serverless platforms like @modal:\n\n> Deploy a Llama 70B endpoint in 5 minutes\n> Auto-scale from 0 to 100 GPUs based on demand\n> Run batch jobs on 1000 GPUs without infrastructure\n> Build multi-step 
agents with persistent state\n\nThis used to require 6 engineers and 3 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135549283422644","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,114],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[68,91]}],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987135551007260858","view_count":131,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread -","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135550185164911","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,296],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552009707743","view_count":138,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The serverless sweet spot:\n\nBursty traffic patterns where traditional GPUs sit idle 80% of the time.\n\nExample: Document 
processing API\n> 1000 requests at 9am\n> 50 requests at 2pm\n> 2000 requests at 5pm\n\nServerless: Pay for 3050 invocations. Dedicated: Pay for 24 hours whether you use it or not.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135551007260858","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135552840192220","view_count":118,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Cold starts are NOT the enemy for most AI apps.\n\nModal's cold start: 2-5s with container caching. Your user sends a document for analysis. \n\nThey wait 30s for results anyway.\n\nThat 3s cold start? 
Irrelevant.\n\nWhat matters: You didn't pay for 23 hours of idle GPU time.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552009707743","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135553767186508","view_count":113,"bookmark_count":0,"created_at":1762605009000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Agentic workflows are where serverless shines.\n\nYour coding agent:\n\n> Analyzes codebase: 8 seconds\n> Generates solution: 12 seconds\n> Runs tests: 15 seconds\n\nTotal: 35 seconds of GPU time across 3 minutes. With dedicated GPUs, you paid for 3 minutes. With Modal, you paid for 35 seconds.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135552840192220","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,294],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135554849218714","view_count":113,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"\"But what about state between agent steps?\"\n\nModal volumes + class-based functions solve this.\n\n> Warm containers persist for 5 minutes. 
\n> Your agent's 10 tool calls reuse the same container. \n> Only the first call has cold start.\n> State lives in memory across invocations. \n\nYou're not round-tripping to Redis.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135553767186508","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,282],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135555788771338","view_count":158,"bookmark_count":0,"created_at":1762605010000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"The batch processing superpower:\n\nDo you need to embed 10M documents? Yes?\n\nModal: Spin up 500 GPUs, process in 20 minutes, shut down. Cost: $80.\nYour infrastructure: Would take 3 days on 8 GPUs. 
Cost: $576 + engineering time to parallelize.\nMap-reduce at GPU scale with zero orchestration.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135554849218714","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,256],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135557034459264","view_count":129,"bookmark_count":0,"created_at":1762605010000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal Use Modal/serverless when you have:\n\n- Unpredictable traffic (0-100 req/sec swings)\n- Batch workloads that spike monthly\n- <10K requests/day per model\n- Multi-model serving (20+ different models)\n- Development velocity matters more than $/request","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135555788771338","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,302],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135558582247657","view_count":109,"bookmark_count":0,"created_at":1762605011000,"favorite_count":1,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"Real production patterns that work:\n\n> RAG pipelines with hourly document ingestion. \n> Multi-step agents that run 100-500 times/day. 
\n> Image generation APIs with weekend traffic spikes. \n> Fine-tuning jobs that run weekly. \n> Research experiments across 50+ model variants.\n\nAll terrible fits for dedicated GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135557034459264","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135559450476916","view_count":613,"bookmark_count":2,"created_at":1762605011000,"favorite_count":4,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal The economics flip at scale:\n\n> Under 50K requests/day: Serverless wins (pay per use). \n> 50K-200K/day: Hybrid (dedicated for base load, serverless for spikes). 
\n> Over 200K/day: Dedicated wins (utilization high enough).\n\nBut most of AI products are under 50K/day.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135558582247657","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[7,245],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1551987185372512263","name":"Modal","screen_name":"modal","indices":[0,6]}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987135560352239849","view_count":598,"bookmark_count":1,"created_at":1762605011000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987135547718992059","full_text":"@modal That's it.\nServerless GPUs aren't \"good enough until you scale.\" \n\nThey're the optimal architecture for most AI applications.\n\nThe question isn't \"when do I move off serverless?\"\n\nIt's \"why would I manage GPUs when I could ship 
products?\"","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987135559450476916","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-10","value":1641,"startTime":1762646400000,"endTime":1762732800000,"tweets":[{"bookmarked":false,"display_text_range":[0,45],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/YXG6qk9qMo","expanded_url":"https://x.com/athleticKoder/status/1987408351177941412/photo/1","id_str":"1987408345771483136","indices":[46,69],"media_key":"3_1987408345771483136","media_url_https":"https://pbs.twimg.com/media/G5SxXFlbIAA3hYn.jpg","type":"photo","url":"https://t.co/YXG6qk9qMo","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"medium":{"faces":[{"x":799,"y":234,"h":63,"w":63}]},"small":{"faces":[{"x":460,"y":134,"h":36,"
w":36}]},"orig":{"faces":[{"x":799,"y":234,"h":63,"w":63}]}},"sizes":{"large":{"h":354,"w":1179,"resize":"fit"},"medium":{"h":354,"w":1179,"resize":"fit"},"small":{"h":204,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":354,"width":1179,"focus_rects":[{"x":0,"y":0,"w":632,"h":354},{"x":0,"y":0,"w":354,"h":354},{"x":0,"y":0,"w":311,"h":354},{"x":59,"y":0,"w":177,"h":354},{"x":0,"y":0,"w":1179,"h":354}]},"media_results":{"result":{"media_key":"3_1987408345771483136"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987408351177941412","view_count":1603,"bookmark_count":1,"created_at":1762670049000,"favorite_count":7,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987408351177941412","full_text":"this kinda messages from college batchmates🫶🏻 https://t.co/YXG6qk9qMo","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[16,84],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1618975370488999936","name":"AshutoshShrivastava","screen_name":"ai_for_success","indices":[0,15]}]},"favorited":false,"in_reply_to_screen_name":"ai_for_success","lang":"en","retweeted":false,"fact_check":null,"id":"1987409501264437532","view_count":38,"bookmark_count":0,"created_at":1762670324000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987334706942415338","full_text":"@ai_for_success who knows it is released as Gemini pro image than gemini flash 
image","in_reply_to_user_id_str":"1618975370488999936","in_reply_to_status_id_str":"1987334706942415338","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-11","value":215512,"startTime":1762732800000,"endTime":1762819200000,"tweets":[{"bookmarked":false,"display_text_range":[0,202],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1987860394912731440","view_count":144396,"bookmark_count":1316,"created_at":1762777825000,"favorite_count":1097,"quote_count":7,"reply_count":36,"retweet_count":71,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"Just use Vector Database\"\n\nUntil you need:\n- 100M+ vectors indexed\n- <10ms p95 search latency\n- $50/month (not $500/month)\n\nThen you build your own vector database.\n\nHere's what that actually means:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,138],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860395713761696","view_count":8758,"bookmark_count":6,"created_at":1762777825000,"favorite_count":26,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Most engineers think vector DB = \n- Install FAISS\n- Wrap with Flask\n- Add some metadata filtering\n- Done\n\nReality hits around 10M 
vectors.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860394912731440","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,216],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860396489805890","view_count":8404,"bookmark_count":0,"created_at":1762777825000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"You're not building a system to search ONE index for ONE user.\n\nYou're building a system that handles THOUSANDS of concurrent searches, with filters, hybrid search, and real-time updates.\n\nCompletely different beast.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860395713761696","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,251],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860397341151584","view_count":8012,"bookmark_count":21,"created_at":1762777826000,"favorite_count":36,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"What you actually need:\n\n> HNSW index builder that doesn't block writes\n> Metadata filtering that scales with cardinality\n> Distributed sharding across index size\n> Real-time upsert pipeline without rebuild\n\nAnd that's just the 
foundation.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860396489805890","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860398163345579","view_count":7254,"bookmark_count":4,"created_at":1762777826000,"favorite_count":17,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Your <10ms p95 search breaks down as:\n\n- Network: 2-3ms (fixed)\n- Metadata pre-filter: 1-3ms (explodes with complex filters)\n- ANN search: 3-8ms (depends on ef_search)\n- Post-filtering: 1-2ms\n\nYou have 0-2ms buffer. \"Just scale horizontally\" doesn't work.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860397341151584","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,107],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":4
80}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}],"symbols":[],"timestamps":[],"urls":[{"display_url":"fullstackagents.substack.com","expanded_url":"https://fullstackagents.substack.com/","url":"https://t.co/ZZMuGnenjh","indices":[61,84]}],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/xO7utN2JQD","expanded_url":"https://x.com/athleticKoder/status/1987860398956007461/photo/1","id_str":"1987860393029443584","indices":[108,131],"media_key":"3_1987860393029443584","media_url_https":"https://pbs.twimg.com/media/G5ZMfs2WoAAORLa.jpg","type":"photo","url":"https://t.co/xO7utN2JQD","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":480,"w":920,"resize":"fit"},"medium":{"h":480,"w":920,"resize":"fit"},"small":{"h":355,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":480,"width":920,"focus_rects":[{"x":0,"y":0,"w":857,"h":480},{"x":0,"y":0,"w":480,"h":480},{"x":0,"y":0,"w":421,"h":480},{"x":40,"y":0,"w":240,"h":480},{"x":0,"y":0,"w":920,"h":480}]},"media_results":{"result":{"media_key":"3_1987860393029443584"}}}]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1987860398956007461","view_count":7732,"bookmark_count":14,"created_at":1762777826000,"favorite_count":16,"quote_count":0,"reply_count":1,"retweet_count":2,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"get my posts in your inbox daily (for free) by subscribing:\n\nhttps://t.co/ZZMuGnenjh \n\nnow back to thread - 
https://t.co/xO7utN2JQD","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398163345579","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,263],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860400067485877","view_count":5391,"bookmark_count":4,"created_at":1762777826000,"favorite_count":11,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The first principle of vector search:\n\nRecall@10 ≠ Recall@100\n\nHNSW with ef_search=50 gives 95% recall@10 but 78% recall@100. Your users want top-100 results with metadata filters.\n\nNow your recall drops to 60%. This is why \"FAISS works fine\" fails in production.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860398956007461","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,205],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401099268406","view_count":4681,"bookmark_count":4,"created_at":1762777826000,"favorite_count":12,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Index memory is the silent killer.\n\n100M vectors × 768 dims × 4 bytes = 307GB just for vectors.\nHNSW graph adds 2-3x that.\nYou're at 900GB memory for ONE index.\n\nAnd you have 20 different embedding 
models.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860400067485877","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860401967477037","view_count":4203,"bookmark_count":5,"created_at":1762777827000,"favorite_count":9,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"\"We need hybrid search with BM25 + vector + metadata filters\"\n\nNow your platform needs:\n- Inverted index alongside HNSW\n- Score fusion that doesn't kill latency\n- Query planning for filter pushdown\n- Cross-encoder reranking in <5ms\n\nThis is where 80% of custom vector DBs fail.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401099268406","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,258],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860402798051731","view_count":3607,"bookmark_count":4,"created_at":1762777827000,"favorite_count":5,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Use 3rd party when you're under 10M vectors, using standard embeddings, can tolerate 50ms+ latency, and cost per query is 100x raw compute.\n\nBuild your own when you have 50M+ vectors, custom embeddings, need sub-15ms p95, or when you're spending 
$500+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860401967477037","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,279],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860403620041157","view_count":3440,"bookmark_count":4,"created_at":1762777827000,"favorite_count":6,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Let's do the math:\n\nManaged Vector DB at $70/million vectors/month + $0.10 per 1K queries:\n100M vectors + 10M queries/month = $7,000 + $1,000 = $8,000\n\nYour self-hosted setup with 2TB RAM machine at $1,000/month:\n= $1,000 compute\n\nBut add $80K engineering, $5K/month maintenance, 8 month break-even.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860402798051731","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,267],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860404647686323","view_count":3117,"bookmark_count":5,"created_at":1762777827000,"favorite_count":8,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"Production vector DBs have four layers:\n\n1. Query parsing (filter optimization, query planning, type checking).\n2. Search execution (HNSW navigator, hybrid fusion, distributed scatter-gather).\n3. Index management (real-time updates, compaction, shard rebalancing).\n4. 
Observability (latency per component, recall metrics, memory pressure).\n\nMost build layer 2 only.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860403620041157","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,284],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860405444633015","view_count":3365,"bookmark_count":10,"created_at":1762777827000,"favorite_count":12,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"The production checklist:\n\n> Use HNSW, not flat index\n> Implement filter pushdown from day one\n> Monitor recall AND latency\n> Auto-shard based on index size, not CPU\n> Track $/query AND queries/sec\n> Have hot-reload for index updates\n> Plan for 50M+ vector growth","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860404647686323","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,171],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1987860406673584332","view_count":3152,"bookmark_count":11,"created_at":1762777828000,"favorite_count":15,"quote_count":0,"reply_count":3,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1987860394912731440","full_text":"That's it.\n\nBuilding a vector database is an 8-month project with memory costs everywhere.\n\nBut at 100M+ vectors? 
Pays for itself in 3 months.\n\nKnow when to build vs rent.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1987860405444633015","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-12","value":247,"startTime":1762819200000,"endTime":1762905600000,"tweets":[{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","t
ype":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.com/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677795090432","indices":[66,89],"media_key":"3_1988279677795090432","media_url_https":"https://pbs.twimg.com/media/G5fJ1SUawAA7kzX.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1319,"w":1097,"resize":"fit"},"medium":{"h":1200,"w":998,"resize":"fit"},"small":{"h":680,"w":566,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1319,"width":1097,"focus_rects":[{"x":0,"y":705,"w":1097,"h":614},{"x":0,"y":222,"w":1097,"h":1097},{"x":0,"y":68,"w":1097,"h":1251},{"x":32,"y":0,"w":660,"h":1319},{"x":0,"y":0,"w":1097,"h":1319}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677795090432"}}},{"display_url":"pic.x.com/3MkWUPU4WX","expanded_url":"https://x.c
om/athleticKoder/status/1988279690273189936/photo/1","id_str":"1988279677816016896","indices":[66,89],"media_key":"3_1988279677816016896","media_url_https":"https://pbs.twimg.com/media/G5fJ1SZaEAASb4l.jpg","type":"photo","url":"https://t.co/3MkWUPU4WX","ext_media_availability":{"status":"Available"},"features":{"all":{"tags":[{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"},{"user_id":"4686835494","name":"Vercel","screen_name":"vercel","type":"user"}]},"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":2048,"w":1536,"resize":"fit"},"medium":{"h":1200,"w":900,"resize":"fit"},"small":{"h":680,"w":510,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2048,"width":1536,"focus_rects":[{"x":0,"y":0,"w":1536,"h":860},{"x":0,"y":0,"w":1536,"h":1536},{"x":0,"y":0,"w":1536,"h":1751},{"x":0,"y":0,"w":1024,"h":2048},{"x":0,"y":0,"w":1536,"h":2048}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988279677816016896"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988279690273189936","view_count":247,"bookmark_count":1,"created_at":1762877793000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988279690273189936","full_text":"\"mom how did we get so rich?\"\n\n\"your deployed b2b saas on vercel\" 
https://t.co/3MkWUPU4WX","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-13","value":216683,"startTime":1762905600000,"endTime":1762992000000,"tweets":[{"bookmarked":false,"display_text_range":[0,197],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1988585174347489742","view_count":146241,"bookmark_count":1072,"created_at":1762950626000,"favorite_count":1089,"quote_count":4,"reply_count":31,"retweet_count":77,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“Just rent a GPU for training”\n\nUntil you need:\n- Multi-node training for 70B+ models\n- $5/hour per GPU (not $30/hour)\n- 90%+ GPU utilization\n\nThen you build your own ml infra.\n\nHere’s the reality:","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[14,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1191207924","name":"Erik Bernhardsson","screen_name":"bernhardsson","indices":[0,13]}]},"favorited":false,"in_reply_to_screen_name":"bernhardsson","lang":"en","retweeted":false,"fact_check":null,"id":"1988657991860818103","view_count":394,"bookmark_count":0,"created_at":1762967987000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988647374605131837","full_text":"@bernhardsson this guy gets 
it!!","in_reply_to_user_id_str":"1191207924","in_reply_to_status_id_str":"1988647374605131837","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,163],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585175324742066","view_count":8884,"bookmark_count":8,"created_at":1762950626000,"favorite_count":50,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Most ML engineers think training infrastructure =\n\n- Rent some A100s\n- Install PyTorch\n- Run training script\n- Scale with more GPUs\n\nThe pain starts around 8 GPUs.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585174347489742","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,232],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585176528494790","view_count":8576,"bookmark_count":9,"created_at":1762950626000,"favorite_count":60,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Remember: You’re not training ONE model on ONE GPU.\n\nYou’re orchestrating DOZENS of experiments across hundreds of GPUs with checkpointing, fault tolerance, and resource sharing.\n\nThat’s a scheduling problem, not a training 
problem.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585175324742066","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,250],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585177413484600","view_count":8284,"bookmark_count":11,"created_at":1762950627000,"favorite_count":67,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"What you actually need:\n\n- Job scheduler that understands GPU topology\n- Distributed checkpoint manager that doesn’t waste bandwidth\n- Network fabric optimized for all-reduce\n- Elastic training that handles node failures\n\nThis is the actual platform.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585176528494790","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,288],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585178462085227","view_count":8035,"bookmark_count":5,"created_at":1762950627000,"favorite_count":33,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Your training cost breakdown at scale:\n\n> Compute: $10/GPU-hour (you pay $30 on cloud)\n> Data transfer: $2/TB (kills you with large datasets)\n> Storage: $0.02/GB-month (checkpoints add up fast)\n> Network: Included (but becomes bottleneck)\n\nThe hidden cost? 
Idle GPU time while debugging.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585177413484600","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,278],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585179519029299","view_count":6920,"bookmark_count":9,"created_at":1762950627000,"favorite_count":37,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The first principle of distributed training:\n\nBandwidth >> Compute for models over 10B params\n\nRing all-reduce needs 2(N-1)/N bandwidth efficiency. With 64 GPUs on 3.2 Tbps InfiniBand, you max out at 200GB/sec actual throughput.\n\nThis is why “just add more GPUs” plateaus.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585178462085227","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,228],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585181201002798","view_count":6038,"bookmark_count":4,"created_at":1762950627000,"favorite_count":34,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Checkpoint storage eats you alive.\n\nTraining Llama 70B:\n- 140GB model weights\n- Optimizer states: 280GB\n- Checkpoints every 1K steps\n- 30 checkpoints = 12.6TB\n- One training run = $250 in storage. 
\n\nYou run 50 experiments/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585179519029299","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585183176433810","view_count":5405,"bookmark_count":8,"created_at":1762950628000,"favorite_count":23,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"“We need to train 10 models simultaneously with different hyperparameters”\n\nNow your platform needs:\n\n- Gang scheduling for multi-GPU jobs\n- Spot instance preemption handling\n- Shared dataset caching across jobs\n- Priority queues with fairness\n\n90% of DIY platforms can’t do this.\n\n> Use cloud when you’re training <5 models/month, using standard frameworks, can tolerate random failures, and engineering time costs more than GPU markup.\n\n> Build your own when you train 20+ models/month, need 70B+ params, want <$10/GPU-hour, or are spending $50K+/month.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585181201002798","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,275],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585184589922329","view_count":4753,"bookmark_count":11,"created_at":1762950628000,"favorite_count":30,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The actual math:\n\nAWS p5.48xlarge (8× H100): $98/hour 100 training runs × 48 hours = 
$470,400/year\n\nYour bare-metal with 64× H100s at $2.5M upfront: Depreciation + power = $150K/year at 60% utilization = $312,500\n\nPlus $200K engineer, $50K maintenance. Break-even: 18 months.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585183176433810","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,280],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585185781207305","view_count":4375,"bookmark_count":13,"created_at":1762950629000,"favorite_count":18,"quote_count":0,"reply_count":1,"retweet_count":1,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"Production training platforms have four layers:\n\n- Orchestration (job queue, gang scheduler, resource manager). \n- Execution (distributed runtime, checkpoint manager, fault handler). \n- Storage (dataset cache, checkpoint store, artifact registry). 
\n- Telemetry (GPU util, training metrics, cost per epoch).\n\nMost build layer 2, skip the rest.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585184589922329","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,277],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585186750005676","view_count":4581,"bookmark_count":19,"created_at":1762950629000,"favorite_count":26,"quote_count":0,"reply_count":2,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"The production checklist:\n\n- Use SLURM or Kubernetes with GPU scheduling \n- Implement automatic checkpoint resume \n- Monitor MFU (model FLOPS utilization), not just GPU% \n- Auto-scale based on queue depth \n- Track $/epoch AND samples/sec \n- Have spot instance fallback ready \n- Plan for node failures mid-training","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585185781207305","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,193],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"in_reply_to_screen_name":"athleticKoder","lang":"en","retweeted":false,"fact_check":null,"id":"1988585187710505453","view_count":4197,"bookmark_count":19,"created_at":1762950629000,"favorite_count":36,"quote_count":1,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"That’s it.\n\nBuilding training infrastructure is a 9-month project with upfront hardware costs.\n\nBut at 100+ training runs/month? 
ROI in 12 months.\n\nThe decision point is your training velocity.","in_reply_to_user_id_str":"1229293267625234432","in_reply_to_status_id_str":"1988585186750005676","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-14","value":128518,"startTime":1762992000000,"endTime":1763078400000,"tweets":[{"bookmarked":false,"display_text_range":[0,88],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1984987985696408026","quoted_status_permalink":{"url":"https://t.co/22cPpSCL6r","expanded":"https://twitter.com/w0rdgenerator/status/1984987985696408026","display":"x.com/w0rdgenerator/…"},"retweeted":false,"fact_check":null,"id":"1988883861548183945","view_count":4505,"bookmark_count":2,"created_at":1763021838000,"favorite_count":18,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988883861548183945","full_text":"looking for a single room in flat in Koramangala/HSR/Indiranagar.\n\nBengaluru people 
hmu.","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,243],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/zQuAgAtsoH","expanded_url":"https://x.com/athleticKoder/status/1988937605539590147/photo/1","id_str":"1988937390795431936","indices":[244,267],"media_key":"3_1988937390795431936","media_url_https":"https://pbs.twimg.com/media/G5ogBOLaoAAlEoj.png","type":"photo","url":"https://t.co/zQuAgAtsoH","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1152,"w":2048,"resize":"fit"},"medium":{"h":675,"w":1200,"resize":"fit"},"small":{"h":383,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":2160,"width":3840,"focus_rects":[{"x":0,"y":10,"w
":3840,"h":2150},{"x":840,"y":0,"w":2160,"h":2160},{"x":973,"y":0,"w":1895,"h":2160},{"x":1380,"y":0,"w":1080,"h":2160},{"x":0,"y":0,"w":3840,"h":2160}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1988937390795431936"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1988937605539590147","view_count":122963,"bookmark_count":486,"created_at":1763034652000,"favorite_count":957,"quote_count":1,"reply_count":139,"retweet_count":40,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"We are hiring for 2 Machine Learning Engineers in Bangalore office. \n\nyou'll work directly with me on super impactful projects \n\nDrop your best work in the comments👇 and I will personally reach out to you if you are a fit.\n\nPlease Don't DM! https://t.co/zQuAgAtsoH","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[0,199],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},
"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"extended_entities":{"media":[{"display_url":"pic.x.com/Z08PdHJ6yk","expanded_url":"https://x.com/athleticKoder/status/1989042374941765965/photo/1","id_str":"1989041991213281284","indices":[200,223],"media_key":"3_1989041991213281284","media_url_https":"https://pbs.twimg.com/media/G5p_JxGacAQGwXv.jpg","type":"photo","url":"https://t.co/Z08PdHJ6yk","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1303,"w":2048,"resize":"fit"},"medium":{"h":763,"w":1200,"resize":"fit"},"small":{"h":433,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1864,"width":2930,"focus_rects":[{"x":0,"y":0,"w":2930,"h":1641},{"x":533,"y":0,"w":1864,"h":1864},{"x":648,"y":0,"w":1635,"h":1864},{"x":999,"y":0,"w":932,"h":1864},{"x":0,"y":0,"w":2930,"h":1864}]},"allow_download_status":{"allow_download":true},"media_results":{"result":{"media_key":"3_1989041991213281284"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1989042374941765965","view_count":755,"bookmark_count":3,"created_at":1763059631000,"favorite_count":3,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989042374941765965","full_text":"honestly i do use gpt/claude for improving my writing \n\nbut i find chat annoying and unproductive at times.\n\nso i am building a better workspace for myself for writing, prototyping and playing around 
https://t.co/Z08PdHJ6yk","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,32],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"2776199244","name":"William Falcon ⚡️","screen_name":"_willfalcon","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"_willfalcon","lang":"en","retweeted":false,"fact_check":null,"id":"1988881603611816173","view_count":292,"bookmark_count":0,"created_at":1763021300000,"favorite_count":0,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988585174347489742","full_text":"@_willfalcon full stack founder🫡","in_reply_to_user_id_str":"2776199244","in_reply_to_status_id_str":"1988697545422614671","is_quote_status":0,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[13,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"95807398","name":"abhishek","screen_name":"abhi1thakur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"abhi1thakur","lang":"tl","retweeted":false,"fact_check":null,"id":"1988968622879043809","view_count":3,"bookmark_count":0,"created_at":1763042047000,"favorite_count":0,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1988937605539590147","full_text":"@abhi1thakur 
hahaha","in_reply_to_user_id_str":"95807398","in_reply_to_status_id_str":"1988944303113359729","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-15","value":133,"startTime":1763078400000,"endTime":1763164800000,"tweets":[{"bookmarked":false,"display_text_range":[13,22],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1795359634049339392","name":"Abhishek🌱","screen_name":"Abhishekcur","indices":[0,12]}]},"favorited":false,"in_reply_to_screen_name":"Abhishekcur","lang":"in","retweeted":false,"fact_check":null,"id":"1989313589140983928","view_count":133,"bookmark_count":0,"created_at":1763124293000,"favorite_count":1,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989313154355204324","full_text":"@Abhishekcur kubuntuuu","in_reply_to_user_id_str":"1795359634049339392","in_reply_to_status_id_str":"1989313154355204324","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-16","value":5362,"startTime":1763164800000,"endTime":1763251200000,"tweets":[{"bookmarked":false,"display_text_range":[0,19],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"in","quoted_status_id_str":"1989396107085189233","quoted_status_permalink":{"url":"https://t.co/LBiTfTTeka","expanded":"https://twitter.com/barralexandra/status/1989396107085189233","display":"x.com/barralexandra/…"},"retweeted":false,"fact_check":null,"id":"1989532478554685674","view_count":5039,"bookmark_count":5,"created_at":1763176481000,"favorite_count":35,"quote_count":0,"reply_count":1,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989532478554685674","full_text":"tldr: data 
labelers","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null},{"bookmarked":false,"display_text_range":[28,43],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1060588105361633280","name":"Antaripa Saha","screen_name":"doesdatmaksense","indices":[0,16]},{"id_str":"1644135335734157313","name":"Factory","screen_name":"FactoryAI","indices":[17,27]},{"id_str":"1353833967221313539","name":"Matan Grinberg","screen_name":"matanSF","indices":[28,36]}]},"favorited":false,"in_reply_to_screen_name":"doesdatmaksense","lang":"en","retweeted":false,"fact_check":null,"id":"1989723590644887707","view_count":323,"bookmark_count":0,"created_at":1763222045000,"favorite_count":3,"quote_count":0,"reply_count":0,"retweet_count":0,"user_id_str":"1229293267625234432","conversation_id_str":"1989707640532767031","full_text":"@doesdatmaksense @FactoryAI @matanSF cooked","in_reply_to_user_id_str":"1060588105361633280","in_reply_to_status_id_str":"1989707640532767031","is_quote_status":0,"is_ai":null,"ai_score":null}]},{"label":"2025-11-17","value":0,"startTime":1763251200000,"endTime":1763337600000,"tweets":[]},{"label":"2025-11-18","value":0,"startTime":1763337600000,"endTime":1763424000000,"tweets":[]}]},"interactions":{"users":[{"created_at":1443053347000,"uid":"3664641493","id":"3664641493","screen_name":"Juicecountyeth","name":"🧪Juice 🧃","friends_count":1574,"followers_count":812,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1675366634188595200/aQEsh6xm_normal.jpg","description":"OG GPU seller.","entities":{"description":{"urls":[]}},"interactions":2},{"created_at":1432022453000,"uid":"3220121588","id":"3220121588","screen_name":"prajpawar23","name":"prajwal","friends_count":2626,"followers_count":859,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1956972838839078913/VdBrWn_q_normal.jpg","description":"22 // ml @qualcomm // prev - gpu engg 
@amd","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"prajpawar.com","expanded_url":"http://prajpawar.com","url":"https://t.co/HU6vdIoxm1","indices":[0,23]}]}},"interactions":2},{"created_at":1703110076000,"uid":"1737595658909908992","id":"1737595658909908992","screen_name":"RaviRaiML","name":"Ravi | ML Engineer","friends_count":260,"followers_count":1220,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1983911680657473536/yIgKdn0P_normal.jpg","description":"Freelance ML Engineer | Fixing AI products with MLOps","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"ravinderrai.com","expanded_url":"https://ravinderrai.com","url":"https://t.co/cSPDGQswR7","indices":[0,23]}]}},"interactions":2},{"created_at":1568011354000,"uid":"1170950527200292864","id":"1170950527200292864","screen_name":"hrishikesshhhh","name":"Hrishikesh Nikam","friends_count":914,"followers_count":288,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1984199310645600257/gMy9jS48_normal.jpg","description":"20 || 6'2 || CS-22 || Software Developer|| Full-Stack Dev","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"linktr.ee/hrishikesshhhh","expanded_url":"https://linktr.ee/hrishikesshhhh","url":"https://linktr.ee/hrishikesshhhh","indices":[0,23]}]}},"interactions":2},{"created_at":1559131659000,"uid":"1133706467507363840","id":"1133706467507363840","screen_name":"HyunRish","name":"rish_hyun","friends_count":27,"followers_count":0,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1245779132991979520/b3oThewl_normal.jpg","description":"another digital footprint 👣","entities":{"description":{"urls":[]}},"interactions":2},{"created_at":1260413786000,"uid":"95807398","id":"95807398","screen_name":"abhi1thakur","name":"abhishek","friends_count":1094,"followers_count":94889,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1976303094146224128/gXXFSwQw_normal.jpg","description":"AI Search 
@vespaengine, ex-@huggingface, World's First 4x GM @kaggle, YouTube 120k+: https://t.co/BHnem8fTu5","entities":{"description":{"urls":[{"display_url":"youtube.com/@abhishekkrtha…","expanded_url":"http://youtube.com/@abhishekkrthakur","url":"https://t.co/BHnem8fTu5","indices":[85,108]}]},"url":{"urls":[{"display_url":"linkedin.com/in/abhi1thakur/","expanded_url":"https://www.linkedin.com/in/abhi1thakur/","url":"https://t.co/uEbTUBVvQL","indices":[0,23]}]}},"interactions":1},{"created_at":1502905292000,"uid":"897875988222271488","id":"897875988222271488","screen_name":"_PaperMoose_","name":"Ryan","friends_count":1518,"followers_count":1051,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1988392745619386378/Eh0X86O2_normal.jpg","description":"Built ARC-AGI 2 evals @gregkamrad. Ex-CTO @ DentoAI. Built https://t.co/JtLGCSctWE for Novo Nordisk. Building automated\n reliability testing for healthcare","entities":{"description":{"urls":[{"display_url":"findmymedsapp.com","expanded_url":"http://findmymedsapp.com","url":"https://t.co/JtLGCSctWE","indices":[60,83]}]},"url":{"urls":[{"display_url":"vunda.ai","expanded_url":"https://vunda.ai","url":"https://t.co/8AbP5xJC34","indices":[0,23]}]}},"interactions":1},{"created_at":1502413899000,"uid":"895814938995957760","id":"895814938995957760","screen_name":"threadreaderapp","name":"Thread Reader App","friends_count":1234,"followers_count":785601,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1813321453183590400/lc6jtC3Y_normal.jpg","description":"I'm a 🤖 to help you read threads more easily. 
Reply to any tweet of a thread and mention me with the \"unroll\" keyword and I'll give you a link back 😀","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"ThreadReaderApp.com","expanded_url":"https://ThreadReaderApp.com","url":"https://t.co/pBpAT7Uy1z","indices":[0,23]}]}},"interactions":1},{"created_at":1500620733000,"uid":"888293854960533504","id":"888293854960533504","screen_name":"leodoan_","name":"Thanh Doan","friends_count":430,"followers_count":349,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1961238073263751168/qrWgPrpN_normal.jpg","description":"software engineer. crafting impactful things to open source world | building overwrite: https://t.co/PCgG9ZSlbu | changelogs: https://t.co/SisBYPqOo0","entities":{"description":{"urls":[{"display_url":"mnismt.com/overwrite","expanded_url":"http://mnismt.com/overwrite","url":"https://t.co/PCgG9ZSlbu","indices":[88,111]},{"display_url":"changelogs.directory","expanded_url":"http://changelogs.directory","url":"https://t.co/SisBYPqOo0","indices":[126,149]}]},"url":{"urls":[{"display_url":"doantranminhthanh.com","expanded_url":"https://doantranminhthanh.com/","url":"https://t.co/v6xaz5R5dB","indices":[0,23]}]}},"interactions":1},{"created_at":1494595384000,"uid":"863021710412570625","id":"863021710412570625","screen_name":"Hunter60505004","name":"Hunter","friends_count":627,"followers_count":92,"profile_image_url_https":"https://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png","description":"","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1492146357000,"uid":"852749744841478144","id":"852749744841478144","screen_name":"Hari1275866","name":"bidda","friends_count":1663,"followers_count":84,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1972577884301910016/rRqzYFft_normal.jpg","description":"GenAI & Data engineering | Tech Enthusiast | Programmer | keen to learn new technology 
|","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1482037687000,"uid":"810350911465799680","id":"810350911465799680","screen_name":"abtw3t","name":"Ab","friends_count":307,"followers_count":108,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1959493931168903168/azbWngMk_normal.png","description":"robotics + ml | @SAEIntl student","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1459845243000,"uid":"717269055061630977","id":"717269055061630977","screen_name":"dhruv2038","name":"Dhruv","friends_count":5361,"followers_count":4183,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1835847331108773888/2F4xtKIS_normal.jpg","description":".","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1334693025000,"uid":"556239875","id":"556239875","screen_name":"LeventTZ1","name":"LTZ","friends_count":247,"followers_count":24,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1594509331441016864/-al2cc_a_normal.jpg","description":"","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1318500234000,"uid":"390011033","id":"390011033","screen_name":"NiiMante","name":"Nii Mante","friends_count":356,"followers_count":150,"profile_image_url_https":"https://pbs.twimg.com/profile_images/378800000490349094/f4fb4e58182772999b2e7d664329aaf3_normal.jpeg","description":"Engineer. Investor. 
Traveler","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"greeks-ai.web.app","expanded_url":"http://greeks-ai.web.app","url":"https://t.co/8MsVV3WNrD","indices":[0,23]}]}},"interactions":1},{"created_at":1240916877000,"uid":"36039399","id":"36039399","screen_name":"duborges","name":"Eduardo Borges","friends_count":1678,"followers_count":16505,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1841183953199235073/vu43psbH_normal.jpg","description":"digital entrepreneur since 1997 ≫ saas ≫ mobile apps ≫ chrome extensions ≫ programmatic sites ≫ softwares ≫ chatbots ≫ hacking ≫ AI","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"viralist.ai","expanded_url":"https://viralist.ai","url":"https://t.co/f5WC89NTzI","indices":[0,23]}]}},"interactions":1},{"created_at":1434480708000,"uid":"3330038775","id":"3330038775","screen_name":"joefioti","name":"Joe Fioti","friends_count":414,"followers_count":2175,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1853888417525837824/6XBdEwVs_normal.jpg","description":"it's not possible, it's necessary. 
building a compiler @luminal_ai (yc s25) to solve inference.","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"luminal.com","expanded_url":"https://luminal.com","url":"https://t.co/bcAyeHGnLm","indices":[0,23]}]}},"interactions":1},{"created_at":1430114943000,"uid":"3177438486","id":"3177438486","screen_name":"junat321","name":"Tanuj Nayak","friends_count":61,"followers_count":7,"profile_image_url_https":"https://abs.twimg.com/sticky/default_profile_images/default_profile_normal.png","description":"","entities":{"description":{"urls":[]}},"interactions":1},{"created_at":1409224518000,"uid":"2776199244","id":"2776199244","screen_name":"_willfalcon","name":"William Falcon ⚡️","friends_count":486,"followers_count":15299,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1843085471850893312/MAWDjJ-4_normal.jpg","description":"CEO @LightningAI. Creator, PyTorch Lightning⚡, Former AI PhD student (pretraining, researcher) @metaAI @CILVRatNYU w @kchonyc @ylecun","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"lightning.ai","expanded_url":"http://lightning.ai","url":"https://t.co/4vitCAUqOd","indices":[0,23]}]}},"interactions":1},{"created_at":1410589785000,"uid":"2768652166","id":"2768652166","screen_name":"georgecurtiss","name":"George","friends_count":251,"followers_count":1356,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1988319344158535682/Pi4T5eC9_normal.jpg","description":"CEO at @helixdb | YC X25 | calisthenics enjoyer 😎 | 🇬🇧 | 6’4” | 23\n\nStar the GH! 
https://t.co/dadvr63vpZ","entities":{"description":{"urls":[{"display_url":"github.com/helixdb/helix-…","expanded_url":"https://github.com/helixdb/helix-db","url":"https://t.co/dadvr63vpZ","indices":[81,104]}]},"url":{"urls":[{"display_url":"helix-db.com","expanded_url":"http://helix-db.com","url":"https://t.co/TvKUaLhNn9","indices":[0,23]}]}},"interactions":1}],"period":14,"start":1762193255367,"end":1763402855367}}},"settings":{},"session":null,"routeProps":{"/creators/:username":{}}}