Lin Qiao is the cofounder and CEO of Fireworks AI, a developer-first inference platform, broadcasting product breakthroughs and funding milestones to ~22K followers. Their feed mixes technical wins, roadmap launches, and confident takes that move engineers, investors, and partners. Expect big product drops, benchmark flexes, and ecosystem playbooks.
Lin tweets like every lunch is a press release; your timeline looks less like a CEO's feed and more like a fireworks display where every spark is labeled 'f1', 'FireFunction', or 'Series B'. Calm down, even compilers need a break.
Raising a $52M Series B led by Sequoia and shipping f1, a reasoning system that beat leading closed models on tough benchmarks, cementing Fireworks AI as a serious inference and systems player.
To accelerate the shift to fast, affordable, and composable AI systems, making powerful multi-model, multi-modal capabilities accessible to developers and enterprises so they can build new products without needing large ML teams.
Believes in engineering-first product-market fit, open access to models and tools, structure and composability over bespoke hacks, close partnerships with hardware and infra providers, and that strong benchmarks and developer adoption are the clearest validators of progress.
Product and technical leadership, able to ship high-performance systems (f1, FireFunction), land marquee investors/partners, and translate engineering advantages into clear market narratives that scale developer adoption.
Can come across as relentlessly product-first and competitive, occasionally defensive in replies; that intensity can alienate non-technical audiences and make community-building feel transactional rather than personal.
Grow on X by leaning into educational storytelling and community hooks: 1) Post thread walkthroughs that break down how f1 and FireFunction work, with simple diagrams and code snippets. 2) Share short demo videos and 30-60s explainer clips highlighting latency/cost comparisons. 3) Run regular AMAs or X Spaces with engineers, partners (NVIDIA/MongoDB), and early customers. 4) Publish reproducible benchmark recipes and invite third-party replication. 5) Create a pinned 'starter' thread for developers with quickstart links, playground access, and use-cases. 6) Surface customer success stories and open-source contributions to broaden appeal beyond VCs. 7) Use concise, multi-tweet threads pre-announcing releases to build anticipation and collect feedback. 8) Engage replies by answering technical questions and retweeting creative developer builds to amplify community. 9) Partner with a couple of influential ML creators for in-depth demos and cross-promotion. 10) Mix humor and humility occasionally; show the team, the bugs, and the rebuilds to humanize the brand.
Fun fact: Lin announced a $52M Series B led by Sequoia and built Fireworks' f1 system that outperformed GPT-4o and Claude 3.5 Sonnet on hard coding, chat and math benchmarks. They have ~22,360 followers, follow 241, and have tweeted 457 times.
Fireworks AI has raised $52M in Series B funding led by @sequoia !
This round propels our mission to enhance our inference platform and lead the shift to compound AI systems. Huge thanks to our investors @nvidia , @AMD , @MongoDB , @benchmark , Sheryl Sandberg , Frank Slootman, @databricks , @alexandr_wang , @howietl , Sajith Wickramasekara and @ClementDelangue from Angel, Series A, and Series B rounds.
Fireworks today offers an ultra-fast, cost-efficient inference engine that tens of thousands of developers use to access the latest models and build creative products, new experiences, and productivity tools.
We will use the new funding to enhance our inference engine, expand our ecosystem to the best and newest hardware from Nvidia and AMD, and deepen integration with databases from MongoDB and others. Our platform enhancements include the latest models, advanced customization options, and improved enterprise features. These advancements make it easier and faster for businesses to customize models and build AI applications without needing an army of ML engineers or scientists.
Also, we will accelerate the shift to compound AI systems. On top of our inference platform, we have built a system for decomposing a complex business task into multiple steps that access multiple models across many modalities (text, audio, image, and more), retrievers, and external tools. We will continue to expand these capabilities. This positions Fireworks AI as the go-to solution for developers and enterprises building new, disruptive products and experiences faster.
Blog post with details: fireworks.ai/blog/fireworks…
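The compound-AI decomposition described above (plan a task into steps, retrieve context, call a model per step) can be sketched as follows. All of the step functions here are hypothetical stubs standing in for real model, retriever, and tool calls, not Fireworks' actual system:

```python
def plan(task: str) -> list[str]:
    # A planner model would decompose the task; here we split naively on "and".
    return [f"step: {part.strip()}" for part in task.split(" and ")]

def retrieve(query: str) -> str:
    # A retriever would fetch relevant documents; stubbed for illustration.
    return f"context for '{query}'"

def generate(step: str, context: str) -> str:
    # A generator model would produce the answer for one step.
    return f"answer({step} | {context})"

def run_compound_task(task: str) -> list[str]:
    """Decompose a task into steps, retrieve context per step, generate per step."""
    results = []
    for step in plan(task):
        ctx = retrieve(step)
        results.append(generate(step, ctx))
    return results
```

The key design idea is that each step can route to a different model or modality, so quality and cost are tuned per step rather than per monolithic prompt.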
I'm very excited to share that @FireworksAI_HQ has raised $250M in Series C funding co-led by @lightspeedvp and @IndexVentures, with participation from @sequoia Capital and @EvanticCapital, bringing our valuation to $4 billion. In total, we raised $327M, with prior rounds led by @benchmark and @sequoia and participation from strategic investors including @nvidia, @AMD, @databricks, @MongoDB, and many angel investors.
When we founded Fireworks in 2022, the vision was simple, but the problem was complex: give builders the speed, cost, and control they need to win. Our mission is to reach Artificial Autonomous Intelligence: automated product and model co-development that achieves maximum quality, speed, and cost-efficiency using generative AI. This round propels our execution, expanding our current tuning and inference platform towards Artificial Autonomous Intelligence.
✔️ We have onboarded hundreds of thousands of developers to customize the latest models to create unique applications and impactful user experiences. 10x from Series B.
✔️ 10K+ organizations are running on Fireworks. Companies including Notion, Shopify, Uber, GenSpark, and Vercel have built and scaled on Fireworks.
✔️ We process more than 10 trillion tokens daily. More than 20x from Series B.
We will use the new funding for the following investments:
🔬 Deepen our research in post-training and inference alignment to maximize the quality, speed, and cost efficiency of computation, including R&D in both systems and algorithms.
🛠️ Expand our product towards a comprehensive toolchain for the new user-experience creation lifecycle, centered on product and model co-design and automation, from model evaluation and reinforcement learning to an ultra-fast inference engine.
📈 Grow our computation footprint 3x in the next year, and continue R&D to minimize computational cost and maximize system utilization.
This round is not just evidence that our bet three years ago is driving a big impact, it's concrete proof of the trust and commitment we've built with our customers, partners, and our community.
Join our mission to build Artificial Autonomous Intelligence fireworks.ai/careers
🔥 FireAttention v3 -- enabling viable alternatives in the GPU inference serving market 🔥
Engineers at @FireworksAI_HQ have successfully ported FireAttention to AMD MI300s, achieving 80% higher throughput and 60% lower latency than NIM on Nvidia H100s. With these improvements, FireAttention V3 makes the AMD MI300 a viable alternative for GPU inference.
AMD chips lack the software optimizations that Nvidia chips benefit from, so we rewrote our attention kernel from scratch. We took advantage of the AMD MI300's higher memory capacity, and accounted for differences in shapes and element swizzling formats. Performance on the MI300 can get better with future firmware updates to power management and software updates for improved matmul performance.
For developers interested in hardware diversity, we are happy to share learnings from writing FireAttention v3. Stay tuned: FireAttention v3 will be available on our on-demand platform shortly.
For enterprises, this finally addresses supply-chain resilience concerns: FireAttention V3 enables broader hardware optionality, especially with Bring Your Own Cloud deployments. Get in touch with us to explore running FireAttention v3 on your hardware!
Blogpost with details: lnkd.in/dujY9R-y
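FireAttention itself is a proprietary CUDA/HIP kernel, but as a rough reference for what any attention kernel must compute, here is a minimal NumPy sketch of scaled dot-product attention. This is an illustrative baseline only; it omits the multi-query sharing of K/V heads, memory tiling, and swizzling tricks that make kernels like FireAttention fast:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Reference attention: softmax(Q K^T / sqrt(d)) V, batched over leading dims."""
    d = q.shape[-1]
    # Similarity scores between every query and key, scaled by sqrt(d).
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d)
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted average of value vectors.
    return weights @ v
```

Optimized kernels compute exactly this result but fuse the steps to avoid materializing the full scores matrix in GPU memory, which is where hardware-specific details like the MI300's larger memory capacity come into play.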
🔥 We're excited to announce @FireworksAI_HQ fine-tuning service! Fine-tune and run models like Mixtral at 300 tokens/sec with no extra inference cost!
🚀 Tune and iterate rapidly - go from dataset to querying a fine-tuned model in minutes, covering 100+ models.
🤝 Seamless integration into blazing-fast inference - your fine-tuned models can be deployed serverless on the Fireworks inference platform.
💰 Competitive cost - Fireworks charges affordable rates per token of training data, with no additional service fee.
Launch blogpost: fireworks.ai/blog/fine-tune…
Your application development velocity matters deeply to us. Can't wait to see what awesome applications you build on top of it!
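As a rough illustration of the dataset-to-query flow, the sketch below builds an OpenAI-style chat-completions request for a fine-tuned model. The URL, model ID, and field names here are placeholders assumed for illustration, not Fireworks' actual API surface; consult the launch blogpost for the real endpoints:

```python
import json

def build_chat_request(model_id: str, prompt: str, api_key: str) -> dict:
    """Assemble an OpenAI-style chat request for a (hypothetical) fine-tuned model."""
    return {
        # Placeholder URL -- substitute your provider's real endpoint.
        "url": "https://api.example.com/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            # e.g. an account-scoped fine-tuned model ID (illustrative).
            "model": model_id,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }
```

The point of serverless fine-tuned deployment is that this request looks identical to one against the base model: only the `model` field changes.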
Writing off Nvidia and OpenAI now is as dangerous as writing off Google earlier this year.

🔥 Introducing Fireworks f1 🔥

f1 is the first reasoning system over open models to beat GPT-4o and Claude 3.5 Sonnet across hard coding, chat and math benchmarks.

🔥 two variants now available in preview: f1 and f1-mini
🔥 access the preview on Fireworks AI Playground for free
🔥 get on the waitlist for free early access to the f1 API

Read more: lnkd.in/ep9zzWJ9

Many people asked me if the Cursor and Kimi arrangement is an after-thought to save face. The reality is Cursor is in compliance from day 0 using Fireworks. Not an after-thought.

🔥 Structure is all you need. 🔥

We're excited to announce:

- FireFunction V1 - our new, open-weights function calling model:
  - GPT-4-level structured output and decision-routing at 4x lower latency
  - open-weights, commercially usable
  - Blog post: fireworks.ai/blog/firefunct…
- JSON mode and grammar mode for ALL language models
  - Guarantee that your output adheres to the structure!
  - Less time fiddling on system prompts!
  - Blog post: fireworks.ai/blog/why-do-al…

🔥 Excited to announce FireAttention: a breakthrough in the speed vs quality tradeoffs of LLMs

We wrote a custom CUDA kernel optimized for MQA,
and FP8/FP16 support on H100. This also lets us run Mixtral at FP8 with negligible impact to quality benchmarks (as opposed to GPTQ,ā¦","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":1,"is_ai":null,"ai_score":null,"source":"scraping","fetched_at":null,"edit_history_tweet_ids":null,"poll_10min_at":null,"poll_3day_at":null,"poll_count":0,"poll_complete":0},{"bookmarked":false,"display_text_range":[0,241],"entities":{"hashtags":[],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[]},"favorited":false,"lang":"en","retweeted":false,"fact_check":null,"id":"1994804934173417573","view_count":56378,"bookmark_count":26,"created_at":1764433532000,"favorite_count":408,"quote_count":0,"reply_count":18,"retweet_count":4,"user_id_str":"51670467","conversation_id_str":"1994804934173417573","full_text":"My daughter was at Valley Fair when shooting happened. She escaped through an underground path to a shelter. \n\nThe 20 mins between getting her call for help and picking her up feel like eternity. 
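The FireAttention tweet above claims Mixtral runs in FP8 with negligible quality impact. As a rough illustration of why e4m3-style FP8 can be that gentle, here is a minimal NumPy sketch simulating a per-tensor-scaled FP8 round-trip; the scaling scheme and 3-bit-mantissa rounding are textbook assumptions, not Fireworks' actual kernel:

```python
import numpy as np

# FP8 e4m3 has a maximum representable magnitude of 448.
E4M3_MAX = 448.0

def quantize_dequantize_fp8(x: np.ndarray) -> np.ndarray:
    """Simulate a per-tensor FP8 (e4m3) round-trip: scale into the FP8
    range, round to a 3-bit mantissa, and scale back."""
    scale = np.abs(x).max() / E4M3_MAX
    scaled = x / scale
    # e4m3 keeps 3 mantissa bits; emulate by rounding to 2^-3 steps
    # relative to each value's binary exponent.
    exp = np.floor(np.log2(np.maximum(np.abs(scaled), 1e-12)))
    step = np.exp2(exp - 3)
    rounded = np.clip(np.round(scaled / step) * step, -E4M3_MAX, E4M3_MAX)
    return rounded * scale

np.random.seed(0)
x = np.random.randn(4096).astype(np.float32)
y = quantize_dequantize_fp8(x)
# Mean relative error stays bounded by the 2^-4 mantissa step (~6%).
rel_err = np.abs(x - y).mean() / np.abs(x).mean()
```

The point of the sketch: per-element relative error is capped by the mantissa granularity, which is why quality benchmarks barely move.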
[Tweet · 57K views]
🔥🔥 Announcing FireLLaVA, the first multi-modal LLaVA model trained by @FireworksAI_HQ with a commercially permissive license. It's also our first open-source model!

While the industry heavily uses text-based foundation models to generate responses, in real-world… https://t.co/qmdKUjOS2d
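FireLLaVA accepts mixed image-and-text input. Below is a hedged sketch of assembling an OpenAI-style multimodal chat message for such a model; the content-parts layout and the model id are assumptions for illustration, not the documented Fireworks schema:

```python
import base64
import json

def build_vision_message(prompt: str, image_bytes: bytes) -> dict:
    """Assemble a chat message mixing text and an inline base64 image,
    in the OpenAI-style content-parts format many providers accept."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }

# Assumed model id and fake image bytes, purely for illustration.
msg = build_vision_message("What is in this image?", b"\x89PNG fake bytes")
payload = json.dumps({"model": "accounts/fireworks/models/firellava-13b",
                      "messages": [msg]})
```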
[Tweet · 322K views]
🚀 Fireworks Reinforcement Fine-Tuning (RFT) has launched!

After many months of iteration on real-world use cases, we are excited to launch the Fireworks RFT public preview. It's a managed RL service that turns open frontier models (e.g. DeepSeek V3, Kimi K2) into custom agents for tools, code, and reasoning.

Our design principle is to bring RL directly into your agents in production, instead of making you simulate or redesign your agents to do RL. Fireworks RFT trains on full agent trajectories (multi-step, tools, retries), so the model learns your workflow as it runs in production.

Real-world results:
- Genspark deep-research agent: +10% quality vs. a SOTA closed model, +33% tool calls, ~50% lower cost
- Vercel v0 coding agent: +33% better than closed models, 93-94% error-free, 40x faster

🔥 The Fireworks RFT public preview is open now. Start training today for free: https://t.co/cRhz9IXuMO
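The RFT tweet above emphasizes training on full agent trajectories rather than single prompt/response pairs. A minimal sketch of what such a trajectory record could look like; this data shape is an illustrative assumption, not the service's real schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """One turn in an agent trajectory: a model action and, if the
    action was a tool call, the observation the tool returned."""
    action: str
    tool_result: str = ""

@dataclass
class Trajectory:
    """A full multi-step rollout with a scalar reward, the unit an RL
    fine-tuning job would train on."""
    prompt: str
    steps: List[Step] = field(default_factory=list)
    reward: float = 0.0

# Hypothetical two-step rollout: one tool call, then a final answer.
traj = Trajectory(prompt="Find the cheapest flight")
traj.steps.append(Step(action='search_flights(dest="SFO")',
                       tool_result='[{"price": 129}]'))
traj.steps.append(Step(action="The cheapest flight is $129."))
traj.reward = 1.0  # e.g. task success from an outcome check
```

Training on the whole trajectory (including the tool result and the retry structure) is what lets the model learn the workflow, not just the final answer.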
[Tweet · 50K views]
🔥 Firefunction-v2, our new open-weights function-calling model 🔥

I'm super excited to announce Firefunction-v2, our latest open-weights model!

- Competitive with GPT-4o at function calling
- 1/10 the cost of GPT-4o and 2x the speed
- Retains both conversation and function-calling capabilities
- Available on both Hugging Face and the Fireworks API
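Function-calling models like Firefunction-v2 emit structured tool calls that the caller dispatches locally. Here is a sketch using the JSON-schema tool format common to function-calling APIs; the response shape is modeled on OpenAI-style APIs, and the `get_weather` tool is hypothetical:

```python
import json

# Tool declaration in the JSON-schema style used by most
# function-calling APIs (name, description, typed parameters).
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch_tool_call(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local Python function."""
    registry = {"get_weather": lambda city: f"22C and sunny in {city}"}
    fn = registry[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return fn(**args)

# A mock model response; a real one would come back from the chat API.
mock_call = {"function": {"name": "get_weather",
                          "arguments": '{"city": "Paris"}'}}
result = dispatch_tool_call(mock_call)
```

The benchmark question for such models is whether they pick the right tool and produce arguments that parse against the declared schema.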
[Tweet · 36K views]
Fireworks passed ASIC speed!

For the first time, GPU-based inference has crossed an ASIC provider.

Benchmark credit to AA
Model: GPT-OSS-120B
Speed: 540 TPS
Legend: purple, Fireworks on B200; orange, Groq
https://t.co/OBa5kHqvk1

[Tweet · 64K views]
🔥 Cursor Composer 2 launched on Fireworks 🔥

This time it's not just inference but also RL powered by @FireworksAI_HQ. So much hard work and so many sleepless nights went into getting this gift out.

Congrats to the @cursor_ai team on launching this SOTA model, which beats Opus 4.6 on terminal bench!

https://t.co/7NjXcABJpg
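Throughput figures like the 540 TPS above are typically measured from a streamed generation, excluding time-to-first-token. A small sketch of that timing logic against a stubbed token stream (no real API involved; the stub's token count and delay are arbitrary):

```python
import time

def measure_tps(token_stream) -> float:
    """Compute decode tokens/sec from first token to last, which
    excludes time-to-first-token as throughput benchmarks usually do."""
    first = last = None
    count = 0
    for _ in token_stream:
        now = time.perf_counter()
        if first is None:
            first = now
        last = now
        count += 1
    if count < 2:
        return 0.0
    elapsed = last - first
    # count - 1 inter-token intervals elapsed between first and last.
    return (count - 1) / elapsed if elapsed > 0 else 0.0

def fake_stream(n=50, delay=0.001):
    """Stub standing in for a streaming API response."""
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"

tps = measure_tps(fake_stream())
```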
[Tweet · 46K views]
🔥 FireAttention v3: enabling viable alternatives in the GPU inference-serving market 🔥

Engineers at @FireworksAI_HQ have successfully ported FireAttention to AMD MI300s, achieving 80% more throughput and 60% lower latency than NIM on Nvidia H100s. With these improvements, FireAttention v3 makes the AMD MI300 a viable alternative for GPU inference.

AMD chips lack the software optimizations that Nvidia chips benefit from, so we rewrote our attention kernel from scratch. We took advantage of the AMD MI300's higher memory capacity and accounted for differences in shapes and element-swizzling formats. MI300 performance can improve further with future firmware updates to power management and software updates for better matmul performance.

For developers interested in hardware diversity, we are happy to share what we learned writing FireAttention v3. Stay tuned for FireAttention v3 on our on-demand platform shortly.

For enterprises, this finally addresses supply-chain resilience concerns: FireAttention v3 enables broader hardware optionality, especially with Bring Your Own Cloud deployments. Get in touch with us to explore running FireAttention v3 on your hardware!

Blog post with details: https://t.co/tdy5CCI7pC

[Tweet · 30K views]
I'm a bit surprised Nvidia didn't reveal a secret stash of ASICs, but acquiring Groq is strategic: it brings an early TPU team member into the battle against TPU.

We had the idea of a disaggregated setup with GPUs (for prefill) and SRAM-based ASICs (for generation) two years ago. It makes sense for hardware vendors to execute on this. Getting the proportion right is hard, but it is the most important part.
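The "getting the proportion right" point can be made concrete with a back-of-envelope model: each request occupies a prefill unit for prompt_tokens/prefill_tps seconds and a decode unit for output_tokens/decode_tps seconds, so balancing the pipeline fixes the ratio of units. All throughput numbers below are illustrative assumptions, not measurements:

```python
def decode_per_prefill(prompt_tokens: int, output_tokens: int,
                       prefill_tps: float, decode_tps: float) -> float:
    """Decode units needed per prefill unit so neither side idles,
    given the average request shape and per-unit throughputs."""
    prefill_time = prompt_tokens / prefill_tps  # seconds per request
    decode_time = output_tokens / decode_tps    # seconds per request
    return decode_time / prefill_time

# Illustrative: 2K-token prompts, 500-token outputs, a prefill unit
# doing 20K tok/s and a decode unit doing 500 tok/s per stream.
ratio = decode_per_prefill(2000, 500, 20_000, 500)
```

Under these assumed numbers each prefill unit feeds ten decode units; shift the request mix toward longer prompts or shorter outputs and the ratio moves sharply, which is why the proportion is hard to pin down in practice.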
[Tweet · 25K views]
🚀 Eval Protocol is open-sourced!

Reinforcement fine-tuning is complicated: there are hundreds of environments and tens of trainers to pick, choose, and integrate. Even worse, in production, agents don't live in clean "gyms." They operate in messy, async environments: flaky APIs, partial observability, conflicting objectives, long feedback loops.

We solve that problem by open-sourcing Eval Protocol. The goal is to let you build your production RFT flow without reinventing the wheel of managing such complex integrations.
- Day-0 support for trainers and environments such as TRL (@huggingface), @rllm_project, and OpenEnv (@PyTorch), as well as proprietary trainers like @OpenAI RFT and @thinkymachines Tinker. More to come.
- Instrument agents in production instead of toy or simulated environments.
- Move from offline benchmarks to live, continuous improvement.

[Tweet · 81K views]
Writing off Nvidia and OpenAI now is as dangerous as writing off Google earlier this year.
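The value of a protocol like Eval Protocol is that any trainer can consume rollouts from any environment through one interface. A toy sketch of such an interface; the class and method names here are illustrative assumptions, not the real Eval Protocol API:

```python
from typing import Protocol, Tuple

class Env(Protocol):
    """Minimal environment contract a trainer could code against."""
    def reset(self) -> str: ...
    def step(self, action: str) -> Tuple[str, float, bool]: ...

class EchoEnv:
    """Toy environment: reward 1.0 when the agent repeats the observation."""
    def reset(self) -> str:
        self._obs = "ping"
        return self._obs

    def step(self, action: str) -> Tuple[str, float, bool]:
        reward = 1.0 if action == self._obs else 0.0
        return "", reward, True  # single-step episode

def rollout(env: Env, policy) -> float:
    """Run one episode and return total reward: the quantity a trainer
    consumes regardless of which environment produced it."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

score = rollout(EchoEnv(), policy=lambda obs: obs)
```

Because `rollout` only touches the `Env` contract, swapping the toy environment for a production agent harness would not change the trainer-side code, which is the integration problem the tweet describes.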
[Tweet · 135K views]
Fireworks AI has raised $52M in Series B funding led by @sequoia!

This round propels our mission to enhance our inference platform and lead the shift to compound AI systems. Huge thanks to our investors @nvidia, @AMD, @MongoDB, @benchmark, Sheryl Sandberg, Frank Slootman, @databricks, @alexandr_wang, @howietl, Sajith Wickramasekara, and @ClementDelangue from our angel, Series A, and Series B rounds.

Fireworks today offers an ultra-fast, cost-efficient inference engine that tens of thousands of developers use to access the latest models and build creative products, new experiences, and productivity tools.

We will use the new funding to enhance our inference engine, expand our ecosystem to the best and newest hardware from Nvidia and AMD, and deepen integration with databases from MongoDB and others. Our platform enhancements include the latest models, advanced customization options, and improved enterprise features. These advancements make it easier and faster for businesses to customize models and build AI applications without needing an army of ML engineers or scientists.

We will also accelerate the shift to compound AI systems. On top of our inference platform, we have built a system for decomposing a complex business task into multiple steps that access multiple models across many modalities (text, audio, image, and more), retrievers, and external tools. We will continue to expand these capabilities, positioning Fireworks AI as the go-to solution for developers and enterprises building disruptive new products and experiences faster.

Blog post with details: https://t.co/9Rx3vgj6YO
I'm very excited to share that @FireworksAI_HQ has raised $250M in Series C funding co-led by @lightspeedvp and @IndexVentures, with participation from @sequoia Capital and @EvanticCapital, bringing our valuation to $4 billion. In total, we raised $327M from prior rounds led by @benchmark and @sequoia, with participation from strategic investors including @nvidia, @AMD, @databricks, @MongoDB, and many angel investors.

When we founded Fireworks in 2022, the vision was simple, but the problem was complex: give builders the speed, cost, and control they need to win. Our mission is to reach Artificial Autonomous Intelligence: automated product and model co-development that reaches maximum quality, speed, and cost-efficiency using generative AI. This round propels our execution, expanding our current tuning and inference platform toward Artificial Autonomous Intelligence.

✅ We have onboarded hundreds of thousands of developers who customize the latest models to create unique applications and impactful user experiences. 10x from Series B.
✅ 10K+ organizations are running on Fireworks. Companies including Notion, Shopify, Uber, GenSpark, and Vercel have built and scaled on Fireworks.
✅ We process more than 10 trillion tokens daily. More than 20x from Series B.

We will use the new funding for the following investments:
🔬 Deepen our research in post-training and inference alignment to maximize the quality, speed, and cost efficiency of computation, including R&D in systems research and algorithmic research.
🛠️ Expand our product toward a comprehensive tool-chain for the new user-experience creation lifecycle, centered around product and model co-design and automation, from model evaluation and reinforcement learning to an ultra-fast inference engine.
📈 Grow our compute footprint 3x over the next year, and continue R&D to minimize computational cost and maximize system utilization.

This round is not just evidence that our bet three years ago is driving a big impact; it's concrete proof of the trust and commitment we've built with our customers, partners, and community.

Join our mission to build Artificial Autonomous Intelligence: https://t.co/esv1l4HrCo
🔥 Introducing Fireworks f1 🔥

f1 is the first reasoning system over open models to beat GPT-4o and Claude 3.5 Sonnet across hard coding, chat, and math benchmarks.

🔥 Two variants now available in preview: f1 and f1-mini
🔥 Access the preview on the Fireworks AI Playground for free
🔥 Join the waitlist for free early access to the f1 API

Read more: https://t.co/BFpbnbn1Na

Many people asked me if the Cursor and Kimi arrangement is an afterthought to save face.

The reality is that Cursor has been in compliance from day 0 using Fireworks. Not an afterthought.

🔥 Cursor Composer2 launched on Fireworks 🔥

This time it's not just inference but also RL powered by @FireworksAI_HQ. So much hard work and so many sleepless nights to get this gift out.

Congrats to the @cursor_ai team on launching this SOTA model beating Opus 4.6 on terminal bench!

https://t.co/7NjXcABJpg
🚀 Fireworks Reinforcement Fine-Tuning (RFT) launched!

After many months of iteration with real-world use cases, we are excited to launch the Fireworks RFT public preview. It's a managed RL service that turns open frontier models (e.g. DeepSeek V3, Kimi K2) into custom agents for tools, code, and reasoning.

Our design principle is to integrate RL directly with your agents in production, instead of having you simulate or redesign your agents to do RL. Fireworks RFT trains on full agent trajectories (multi-step, tools, retries), so the model learns your workflow in production.

Here are real-world results:

📈 Genspark deep-research agent: +10% quality vs. a SOTA closed model, +33% tool calls, ~50% lower cost
📈 Vercel V0 coding agent: +33% better than closed models, 93-94% error-free, 40x faster

🔥 The Fireworks RFT public preview is open now. Start training today for free: https://t.co/cRhz9IXuMO

My daughter was at Valley Fair when the shooting happened. She escaped through an underground path to a shelter.

The 20 minutes between getting her call for help and picking her up felt like an eternity.

I wish all the injured victims a full recovery.

I'm a bit surprised Nvidia didn't reveal a secret stash of ASICs, but acquiring Groq is strategic: it brings an early TPU team member into the battle against TPU.

We had the idea of a disaggregated setup with GPUs (for prefill) and ASICs with SRAM (for generation) two years ago. It makes sense for hardware vendors to execute on this. Getting the proportion right is hard but most important.
🔥 Firefunction-v2, a new open-weights function-calling model 🔥

I'm super excited to announce Firefunction-v2, our latest open-weights model!

- Competitive with GPT-4o at function calling
- 1/10 the cost of GPT-4o and 2x the speed
- Retains both conversation and function-calling capabilities
- Available on both Hugging Face and the Fireworks API
🔥 Structure is all you need. 🔥

We're excited to announce:

- FireFunction V1, our new open-weights function-calling model:
  - GPT-4-level structured output and decision-routing at 4x lower latency
  - Open weights, commercially usable
  - Blog post: https://t.co/TmV3t0hMDl
- JSON mode and grammar mode for ALL language models:
  - Guarantee that your output adheres to the structure!
  - Less time fiddling with system prompts!
  - Blog post: https://t.co/TAAFDqyIvo
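A JSON-mode request against an OpenAI-compatible endpoint can be sketched as below. The exact `response_format` fields a given platform accepts may differ; the model id and the attached schema are assumptions for illustration:

```python
import json

# Sketch: constrain a chat completion to structured JSON output.
# Field names follow the common "response_format" convention (assumed).
def build_json_mode_request(model, prompt, schema=None):
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # ask the server to constrain decoding to valid JSON
        "response_format": {"type": "json_object"},
    }
    if schema is not None:
        # attach a JSON Schema so the output adheres to a fixed structure
        body["response_format"]["schema"] = schema
    return body

req = build_json_mode_request(
    "accounts/fireworks/models/firefunction-v1",  # illustrative model id
    "Extract the city and country from: 'Paris is the capital of France.'",
    schema={"type": "object",
            "properties": {"city": {"type": "string"},
                           "country": {"type": "string"}}},
)
print(json.dumps(req, indent=2))
```

Grammar mode generalizes the same idea: instead of a JSON Schema, the decoder is constrained by a formal grammar, so any output the model emits is guaranteed to parse.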
Fireworks passed ASIC speed!

For the first time, GPU-based inference crossed an ASIC provider.

Benchmark credit to AA
Model: GPT-OSS-120B
Speed: 540 TPS
Legend: Purple - Fireworks on B200; Orange - Groq
https://t.co/OBa5kHqvk1
🔥 We're excited to announce the @FireworksAI_HQ fine-tuning service! Fine-tune and run models like Mixtral at 300 tokens/sec with no extra inference cost!

🚀 Tune and iterate rapidly: go from dataset to querying a fine-tuned model in minutes, covering 100+ models.
🤝 Seamless integration into blazing-fast inference: your fine-tuned models can deploy serverless on the Fireworks inference platform.
💰 Competitive cost: Fireworks charges affordable rates per token of training data, with no additional service fee.

Launch blog post: https://t.co/Dr40ztUEXb

Your application development velocity matters deeply to us. Can't wait to see the awesome applications you build on top of it!
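The dataset-to-model flow starts with a training file. A common convention for chat fine-tuning is one JSON object of `messages` per line (JSONL); the exact schema a given platform expects is an assumption here, so treat this as a sketch of the format, not a spec:

```python
import json
import os
import tempfile

# Sketch: prepare a chat-style JSONL fine-tuning dataset.
# One training example per line, each a {"messages": [...]} object (assumed schema).
examples = [
    {"messages": [
        {"role": "user", "content": "Classify sentiment: 'great product'"},
        {"role": "assistant", "content": "positive"},
    ]},
    {"messages": [
        {"role": "user", "content": "Classify sentiment: 'broke in a day'"},
        {"role": "assistant", "content": "negative"},
    ]},
]

path = os.path.join(tempfile.gettempdir(), "train.jsonl")
with open(path, "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")  # one JSON object per line

# Sanity-check: every line parses back and carries a messages list
with open(path) as f:
    rows = [json.loads(line) for line in f]
print(len(rows), "examples written to", path)
```

Validating the file locally like this catches malformed lines before an upload job fails halfway through.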
://app.fireworks.ai/blog/firellava-the-first-commercially-permissive-oss-llava-model","url":"https://t.co/aBNd8PAF5N","indices":[802,825]}],"user_mentions":[{"id_str":"1575886662957047812","name":"Fireworks AI","screen_name":"FireworksAI_HQ","indices":[72,87]},{"id_str":"1575886662957047812","name":"Fireworks AI","screen_name":"FireworksAI_HQ","indices":[72,87]},{"id_str":"2267475408","name":"Haotian Liu","screen_name":"imhaotian","indices":[866,876]},{"id_str":"2745113140","name":"Chunyuan Li","screen_name":"ChunyuanLi","indices":[879,890]},{"id_str":"982116964008058880","name":"Yong Jae Lee","screen_name":"yong_jae_lee","indices":[893,906]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/qmdKUjOS2d","expanded_url":"https://x.com/lqiao/status/1748243039766925351/photo/1","id_str":"1748236739263746049","indices":[270,293],"media_key":"3_1748236739263746049","media_url_https":"https://pbs.twimg.com/media/GEL8DUMaUAEt_TO.jpg","type":"photo","url":"https://t.co/qmdKUjOS2d","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":960,"w":986,"resize":"fit"},"medium":{"h":960,"w":986,"resize":"fit"},"small":{"h":662,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":960,"width":986,"focus_rects":[{"x":0,"y":408,"w":986,"h":552},{"x":0,"y":0,"w":960,"h":960},{"x":0,"y":0,"w":842,"h":960},{"x":0,"y":0,"w":480,"h":960},{"x":0,"y":0,"w":986,"h":960}]},"media_results":{"result":{"media_key":"3_1748236739263746049"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1748243039766925351","view_count":56523,"bookmark_count":317,"created_at":1705648596000,"favorite_count":347,"quote_count":13,"reply_count":10,"retweet_count":61,"user_id_str":"51670467","conversation_id_str":"1748243039766925351","full_text":"š„š„Announcing FireLLaVA 
-- the first multi-modal LLaVA model, trained by @FireworksAI_HQ , with a commercially permissive license. It’s also our first open source model! \n\nWhile the industry heavily uses text-based foundation models to generate responses, in real-world… https://t.co/qmdKUjOS2d","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null,"source":"scraping","fetched_at":null,"edit_history_tweet_ids":null,"poll_10min_at":null,"poll_3day_at":null,"poll_count":0,"poll_complete":0},{"bookmarked":false,"display_text_range":[0,65],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/xrfwfqrlsy","expanded_url":"https://twitter.com/lqiao/status/1767406833692737971/photo/1","id_str":"1767406072372092928","indices":[66,89],"media_key":"3_1767406072372092928","media_url_https":"https://pbs.twimg.com/media/GIcWdW2bYAASmRx.jpg","type":"photo","url":"https://t.co/xrFWfqRLSY","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1030,"w":1688,"resize":"fit"},"medium":{"h":732,"w":1200,"resize":"fit"},"small":{"h":415,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1030,"width":1688,"focus_rects":[{"x":0,"y":0,"w":1688,"h":945},{"x":0,"y":0,"w":1030,"h":1030},{"x":0,"y":0,"w":904,"h":1030},{"x":0,"y":0,"w":515,"h":1030},{"x":0,"y":0,"w":1688,"h":1030}]},"media_results":{"result":{"media_key":"3_1767406072372092928"}}},{"display_url":"pic.x.com/xrfwfqrlsy","expanded_url":"https://twitter.com/lqiao/status/1767406833692737971/photo/1","id_str":"1767406534991237120","indices":[66,89],"media_key":"3_1767406534991237120","media_url_https":"https://pbs.twimg.com/media/GIcW4SPbUAAYIrW.jpg","type":"photo","url":"https://t.co/xrFWfqRLSY","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"si
zes":{"large":{"h":888,"w":1704,"resize":"fit"},"medium":{"h":625,"w":1200,"resize":"fit"},"small":{"h":354,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":888,"width":1704,"focus_rects":[{"x":0,"y":0,"w":1586,"h":888},{"x":0,"y":0,"w":888,"h":888},{"x":0,"y":0,"w":779,"h":888},{"x":160,"y":0,"w":444,"h":888},{"x":0,"y":0,"w":1704,"h":888}]},"media_results":{"result":{"media_key":"3_1767406534991237120"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"1743487864934162432","name":"Artificial Analysis","screen_name":"ArtificialAnlys","indices":[49,65]}]},"extended_entities":{"media":[{"display_url":"pic.twitter.com/xrFWfqRLSY","expanded_url":"https://twitter.com/lqiao/status/1767406833692737971/photo/1","id_str":"1767406072372092928","indices":[66,89],"media_key":"3_1767406072372092928","media_url_https":"https://pbs.twimg.com/media/GIcWdW2bYAASmRx.jpg","type":"photo","url":"https://t.co/xrFWfqRLSY","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":1030,"w":1688,"resize":"fit"},"medium":{"h":732,"w":1200,"resize":"fit"},"small":{"h":415,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":1030,"width":1688,"focus_rects":[{"x":0,"y":0,"w":1688,"h":945},{"x":0,"y":0,"w":1030,"h":1030},{"x":0,"y":0,"w":904,"h":1030},{"x":0,"y":0,"w":515,"h":1030},{"x":0,"y":0,"w":1688,"h":1030}]},"media_results":{"result":{"media_key":"3_1767406072372092928"}}},{"display_url":"pic.twitter.com/xrFWfqRLSY","expanded_url":"https://twitter.com/lqiao/status/1767406833692737971/photo/1","id_str":"1767406534991237120","indices":[66,89],"media_key":"3_1767406534991237120","media_url_https":"https://pbs.twimg.com/media/GIcW4SPbUAAYIrW.jpg","type":"photo","url":"https://t.co/xrFWfqRLSY","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"
faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":888,"w":1704,"resize":"fit"},"medium":{"h":625,"w":1200,"resize":"fit"},"small":{"h":354,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":888,"width":1704,"focus_rects":[{"x":0,"y":0,"w":1586,"h":888},{"x":0,"y":0,"w":888,"h":888},{"x":0,"y":0,"w":779,"h":888},{"x":160,"y":0,"w":444,"h":888},{"x":0,"y":0,"w":1704,"h":888}]},"media_results":{"result":{"media_key":"3_1767406534991237120"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1767406833692737971","view_count":46765,"bookmark_count":93,"created_at":1710217600000,"favorite_count":180,"quote_count":2,"reply_count":10,"retweet_count":21,"user_id_str":"51670467","conversation_id_str":"1767406833692737971","full_text":"š„Fireworks beats Groq! š„\n\nBenchmark credits from @ArtificialAnlys https://t.co/xrFWfqRLSY","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null,"source":"scraping","fetched_at":null,"edit_history_tweet_ids":null,"poll_10min_at":null,"poll_3day_at":null,"poll_count":0,"poll_complete":0},{"bookmarked":false,"display_text_range":[0,274],"entities":{"hashtags":[],"media":[{"display_url":"pic.x.com/9GXFEWuTjz","expanded_url":"https://x.com/lqiao/status/1992309156859228413/photo/1","id_str":"1992307163314978816","indices":[275,298],"media_key":"3_1992307163314978816","media_url_https":"https://pbs.twimg.com/media/G6YYzzDbAAAw-bc.jpg","type":"photo","url":"https://t.co/9GXFEWuTjz","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":684,"w":1094,"resize":"fit"},"medium":{"h":684,"w":1094,"resize":"fit"},"small":{"h":425,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":684,"width":109
4,"focus_rects":[{"x":0,"y":71,"w":1094,"h":613},{"x":0,"y":0,"w":684,"h":684},{"x":0,"y":0,"w":600,"h":684},{"x":0,"y":0,"w":342,"h":684},{"x":0,"y":0,"w":1094,"h":684}]},"media_results":{"result":{"media_key":"3_1992307163314978816"}}}],"symbols":[],"timestamps":[],"urls":[],"user_mentions":[{"id_str":"778764142412984320","name":"Hugging Face","screen_name":"huggingface","indices":[605,617]},{"id_str":"1975088327431999488","name":"rLLM","screen_name":"rllm_project","indices":[620,633]},{"id_str":"776585502606721024","name":"PyTorch","screen_name":"PyTorch","indices":[645,653]},{"id_str":"4398626122","name":"OpenAI","screen_name":"OpenAI","indices":[706,713]},{"id_str":"1891523745929732096","name":"Thinking Machines","screen_name":"thinkymachines","indices":[723,738]}]},"extended_entities":{"media":[{"display_url":"pic.x.com/9GXFEWuTjz","expanded_url":"https://x.com/lqiao/status/1992309156859228413/photo/1","id_str":"1992307163314978816","indices":[275,298],"media_key":"3_1992307163314978816","media_url_https":"https://pbs.twimg.com/media/G6YYzzDbAAAw-bc.jpg","type":"photo","url":"https://t.co/9GXFEWuTjz","ext_media_availability":{"status":"Available"},"features":{"large":{"faces":[]},"medium":{"faces":[]},"small":{"faces":[]},"orig":{"faces":[]}},"sizes":{"large":{"h":684,"w":1094,"resize":"fit"},"medium":{"h":684,"w":1094,"resize":"fit"},"small":{"h":425,"w":680,"resize":"fit"},"thumb":{"h":150,"w":150,"resize":"crop"}},"original_info":{"height":684,"width":1094,"focus_rects":[{"x":0,"y":71,"w":1094,"h":613},{"x":0,"y":0,"w":684,"h":684},{"x":0,"y":0,"w":600,"h":684},{"x":0,"y":0,"w":342,"h":684},{"x":0,"y":0,"w":1094,"h":684}]},"media_results":{"result":{"media_key":"3_1992307163314978816"}}}]},"favorited":false,"lang":"en","possibly_sensitive":false,"possibly_sensitive_editable":true,"retweeted":false,"fact_check":null,"id":"1992309156859228413","view_count":24665,"bookmark_count":203,"created_at":1763838492000,"favorite_count":208,"quote_count":3,"reply_count"
:9,"retweet_count":39,"user_id_str":"51670467","conversation_id_str":"1992309156859228413","full_text":"š Eval Protocol is Open Sourced!\n\nReinforcement fine-tuning is complicated, because there are hundreds of environments and tens of trainers you can pick and choose and integrate with. Even worse, in production, agents don’t live in clean “gyms.” They operate in messy, async environments - flaky APIs, partial observability, conflicting objectives, long feedback loops. \n\nWe solve that problem by open sourcing Eval Protocol. The goal is for you to build your production RFT flow without reinventing the wheel of managing such complex integration. \nš Day 0 support for trainers and environments like TRL (@huggingface), @rllm_project , OpenEnv (@PyTorch), as well as support for proprietary trainers like @OpenAI RFT and @thinkymachines Tinker. More to come.\nš Instrument agents in production instead of toy or simulated environments\nš Move from offline benchmarks to live, continuous improvement","in_reply_to_user_id_str":null,"in_reply_to_status_id_str":null,"is_quote_status":0,"is_ai":null,"ai_score":null,"source":"scraping","fetched_at":null,"edit_history_tweet_ids":null,"poll_10min_at":null,"poll_3day_at":null,"poll_count":0,"poll_complete":0}],"activities":null,"interactions":null,"interactions_updated":null,"created":1774576200475,"updated":1774576200475,"type":"the innovator","hits":1},"people":[{"user":{"id":"446719282","name":"Shivon Zilis","description":"Artificial intelligence, biological intelligence, and whatever exists in between and beyond. 
Made in Canada.","followers_count":241624,"friends_count":1715,"statuses_count":3610,"profile_image_url_https":"https://pbs.twimg.com/profile_images/755006920314937344/PPQ8LKFs_normal.jpg","screen_name":"shivon","location":"Palo Alto, CA","entities":{"description":{"urls":[]}}},"details":{"type":"The Innovator","description":"A curious bridge-builder at the intersection of artificial and biological intelligence ā warm, candid, and relentlessly future-focused. Shivon blends technical curiosity with personal humanity, making complex ideas feel inviting and urgent. Proudly Canadian and unafraid to learn out loud.","purpose":"To accelerate thoughtful integration of AI and biological intelligence by translating deep technical ideas into accessible conversations, fostering collaborations that steer powerful technologies toward beneficial, long-term outcomes for humanity.","beliefs":"Values curiosity, scientific rigor, and existential responsibility; believes technology should be pursued for its long-term relevance to our species rather than short-term profit. She trusts open dialogue, mentorship, and empathetic communication as tools to widen participation in shaping the future.","facts":"Made in Canada. Regularly uses conversational AIs as study partners (once had an unexpectedly rewarding hour-long physics deep-dive with Grok). 
Proud parent who shares glimpses of family life alongside cutting-edge ideas.","strength":"Skilled at synthesizing complex, cross-disciplinary concepts and communicating them with warmth; cultivates high-trust relationships across industry and public spheres, and drives high engagement by pairing technical credibility with personal authenticity.","weakness":"Her close association with high-profile figures and bold existential framing can polarize audiences; a tendency to wear many hats (investor, communicator, learner, parent) sometimes opens her to intensified scrutiny and emotional vulnerability online.","roast":"Sheās building the future of intelligence and can explain quantum entanglement between Instagram posts ā but still asks an AI to quiz her because admitting you need help is the new flex. Cute, efficient, and suspiciously good at making the rest of us look like we skipped homework.","win":"Built a highly engaged audience (241k+ followers) that follows both her personal moments and substantive AI conversations, turning private curiosity into public momentum for smarter, more humane tech discussion.","recommendation":"Grow on X by running short, teachable thread series (e.g., '5-minute AI & Bio explainer' threads), hosting regular Spaces for live Q&As with experts, pinning an evergreen primer on her stance/approach to AI, leveraging succinct visuals or micro-videos to explain tricky concepts, and amplifying community questions ā make learning participatory so followers become advocates."},"created":1774577820882,"type":"the innovator","id":"shivon"},{"user":{"id":"5925542","name":"Casey","description":"š» A N A R C H O ā C A T B U S\nš https://t.co/Qy87VuaizA\nš„µ https://t.co/2cPqL2xpuh\nšø https://t.co/NGZ3GeedWR\nš¤ https://t.co/eG0CZL5IXH\nš 
https://t.co/4MdA62CeUh","followers_count":239347,"friends_count":385,"statuses_count":12228,"profile_image_url_https":"https://pbs.twimg.com/profile_images/925271448209330176/CQK9OiW9_normal.jpg","screen_name":"rodarmor","location":"The Blue Planet","entities":{"description":{"urls":[{"display_url":"ordinals.com","expanded_url":"https://ordinals.com","indices":[32,55],"url":"https://t.co/Qy87VuaizA"},{"display_url":"hell.money","expanded_url":"http://hell.money","indices":[58,81],"url":"https://t.co/2cPqL2xpuh"},{"display_url":"fun.film","expanded_url":"https://fun.film","indices":[84,107],"url":"https://t.co/NGZ3GeedWR"},{"display_url":"just.systems","expanded_url":"https://just.systems","indices":[110,133],"url":"https://t.co/eG0CZL5IXH"},{"display_url":"rodarmor.com","expanded_url":"https://rodarmor.com","indices":[136,159],"url":"https://t.co/4MdA62CeUh"}]}}},"details":{"type":"The Innovator","description":"Casey – the self-styled ANARCHO•CATBUS – is a code-first Bitcoin builder who ships protocol work, docs, and hot takes in equal measure. They turn technical releases (like ord 0.17.0 / runes) into community events and viral threads. Expect blunt humor, dev-grade explanations, and a talent for making complex ideas shareable.","purpose":"To push Bitcoin tooling and standards forward by building, documenting, and catalyzing community adoption – turning experimental protocol work into usable, well-explained primitives that others can build on.","beliefs":"Decentralization and Bitcoin-first priorities; open-source transparency and rigorous technical debate; healthy skepticism of hype (but a willingness to engage with the “degen” culture to educate and redirect it); clear docs and reproducible implementations over marketing gloss.","facts":"Fun fact: Casey calls themself ANARCHO–CATBUS, has 239,347 followers, and has tweeted 12,228 times. 
They led the release of ord 0.17.0 (including the genesis rune UNCOMMON•GOODS), published plain-English docs for runes, and once joked they’d commit seppuku if the runes marketcap didn’t hit $1B in a month – which is equal parts meme and marketing.","strength":"Deep technical credibility + the audience reach to make protocol work matter. Excellent at shipping code, writing clear docs, and creating viral, opinionated content that mobilizes developers and users.","weakness":"Provocative humor and extreme hyperbole can polarize and alienate some community members; bluntness sometimes reads as dismissive rather than constructive. Occasional overreliance on meme-thrill could overshadow nuance when needed.","roast":"You call yourself ANARCHO–CATBUS and threaten ritual seppuku over market caps – congratulations, you’re the only protocol maintainer whose release notes could double as a punk zine manifesto and a midlife crisis tweetstorm.","win":"Shipping ord 0.17.0 with the final runes implementation and clear, plain-English docs that rallied wide community attention and drove hundreds of thousands of views and conversations.","recommendation":"Pin a clear “Start Here” thread linking the docs and a TL;DR; run a weekly technical thread series (short explainer + code snippet + visuals) to onboard devs, host regular X Spaces AMAs for real-time Q&A, post short explainer videos/diagrams for non-devs, collaborate with other builders for cross-promotion, engage critics constructively in replies, and use polls or mini-demos to turn viral attention into GitHub stars, contributors, and newsletter sign-ups."},"created":1774577656443,"type":"the innovator","id":"rodarmor"},{"user":{"id":"449094976","name":"Johnny Ho","description":"Cofounder, CSO @perplexity_ai. Former high frequency trader, competitive programmer. 
Think fast, build faster.","followers_count":39439,"friends_count":242,"statuses_count":96,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1784680407792779264/ddkiBgsI_normal.jpg","screen_name":"randomjohnnyh","location":"New York, NY","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"perplexity.com","expanded_url":"https://www.perplexity.com","indices":[0,23],"url":"https://t.co/GejABxxFvB"}]}}},"details":{"type":"The Innovator","description":"Cofounder and CSO at Perplexity who moves at light speed ā ex-high-frequency trader and competitive programmer who turns ambitious ideas into shipping products. Focused on infrastructure, agentic systems, and devex to make AI reliably useful. Thinks fast, builds faster.","purpose":"To remove friction between people and complex information by building fast, reliable AI infrastructure and agentic tooling that lets computers take on the hardest tasks so humans can focus on higher-leverage work.","beliefs":"Values speed, pragmatic experimentation, and measurable reliability. Believes in shipping early and iterating, automating context switching, empowering developer experience, and letting data guide decisions over dogma.","facts":"Fun fact: Johnny's very first project at Perplexity was an AI stock screener when LLMs were ~50% reliable ā today it's back in product at ~99% reliability. 
Also, there's an unofficial office protocol he mentions: ask the AI before asking another person to reduce context switching.","strength":"Rapid product- and infra-level execution, deep technical chops from HFT and competitive programming, strong focus on reliability and instrumentation, and the ability to rally engineering teams to ship complex features fast.","weakness":"Can prioritize speed over polish or community storytelling, occasionally terse in public engagement, and may under-index on audience-facing content (low tweet volume relative to impact) which slows personal brand growth.","roast":"Johnny moves so fast he asks an AI for permission before he finishes typing his own thought ā blink during his standup and you'll miss three launches and a pull request that already auto-merged.","win":"Revived his very first Perplexity project ā an AI stock screener ā and shipped it to production, improving from ~50% LLM reliability two years ago to ~99% today; plus led major infra improvements powering Deep Research and Computer.","recommendation":"Grow on X by translating infra wins into accessible narratives: post short explainer threads (before/after metrics, root causes, fixes), share lightweight demos/screenshots or short videos, run occasional AMAs/Spaces about agentic tooling, spotlight team stories and post-mortems, engage frequently with replies, and pin a signature thread called something like āBuild with Johnnyā that bundles lessons, code snippets, and product milestones."},"created":1774577513385,"type":"the innovator","id":"randomjohnnyh"},{"user":{"id":"13235832","name":"Nat 
Friedman","description":"https://t.co/Lhh178sIjq","followers_count":279291,"friends_count":833,"statuses_count":5674,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1677873294/image_normal.jpg","screen_name":"natfriedman","location":"California","entities":{"description":{"urls":[{"display_url":"nat.org","expanded_url":"http://nat.org","indices":[0,23],"url":"https://t.co/Lhh178sIjq"}]}}},"details":{"type":"The Innovator","description":"A tech entrepreneur who funds and builds audacious, science-forward projects – from cracking 2,000-year-old Herculaneum scrolls to testing plastics in Bay Area foods. He mixes deep technical ability with public-minded prizes and blunt, high-signal takes on the state of tech and policy. Expect big experiments, bold bets, and lively threads.","purpose":"To use technology, incentives, and community muscle to unlock hidden knowledge and solve stubborn real-world problems – accelerating discovery, preserving cultural heritage, and turning curiosity into measurable breakthroughs.","beliefs":"Believes in building tools not just for profit but for progress: open collaboration, meritocratic teams, rigorous engineering, and bold, prize-driven approaches that attract talent worldwide. Values evidence, practical impact, and transparency, and trusts engineers everywhere to deliver when given the resources and challenge.","facts":"Fun fact: Nat launched the Vesuvius Challenge and helped read Herculaneum scrolls after 2,000 years (first revealed word: 'πορφυρας'). Profile snapshot: ~279,291 followers, following 833, and ~5,674 tweets. 
The Vesuvius Challenge awarded a $700,000 grand prize and announced a new $100,000 prize for 2024.","strength":"Visionary projectābuilder who can fund and mobilize top talent, translate technical complexity into public narratives, and attract media and community attention to big, otherwise-neglected problems.","weakness":"Bluntness on social media can provoke polarization; covering many disparate topics risks diluting a core audience; occasionally trades nuance for punchy takes that spark heated replies.","roast":"Youāre the only person who will bankroll a $700k archaeology prize, crow about reading ancient scrolls, and then spend the next afternoon arguing about takeout plastics on Twitter ā basically Indiana Jones with a startup pitch deck and a very stubborn reply button.","win":"Spearheaded the Vesuvius Challenge that decoded parts of the Herculaneum scrolls for the first time in 2,000 years, crowning a $700k-winning team and publicly revealing never-before-seen ancient text.","recommendation":"Tell the story like a serialized documentary: post tight, image-rich threads that show stepābyāstep progress on projects (code, scans, people), pin milestone threads, run regular Spaces/AMAs with winning teams and scholars, share short explainer videos and data visualizations, tag collaborators to amplify reach, convert interest into a newsletter or mailing list for deeper engagement, and use targeted promoted posts to turn curious viewers into longāterm followers."},"created":1774576954911,"type":"the innovator","id":"natfriedman"},{"user":{"id":"180115578","name":"Mira Murati","description":"Now building @thinkymachines. 
Previously CTO @OpenAI","followers_count":452578,"friends_count":604,"statuses_count":340,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1891915096278073344/-pNYsXgr_normal.jpg","screen_name":"miramurati","location":"San Francisco, CA","entities":{"description":{"urls":[]}}},"details":{"type":"The Innovator","description":"A product-minded technologist who builds infrastructure and teams to make advanced AI useful and understandable. Former CTO of OpenAI now founding Thinking Machines to push open science, strong foundations, and practical applications. Combines deep engineering chops with a people-first leadership style.","purpose":"To advance AI by creating solid technical foundations, practical tools people can actually use, and an open scientific culture that spreads knowledge ā so powerful systems become widely useful, transparent, and responsibly deployed.","beliefs":"Believes in people-first engineering, open science, and rigorous foundations as the path to trustworthy, useful AI. Values collaboration, transparency, and building tools that adapt to real human needs rather than hype. Trusts that scaling capability must go hand-in-hand with clarity and responsibility.","facts":"Fun fact: Mira launched the ChatGPT iOS app and served as CTO of OpenAI before founding Thinking Machines (now @thinkymachines). She reaches a large audience (452,578 followers) and frequently tweets short, people-focused notes that highlight team and product wins.","strength":"Combines deep technical expertise with product instincts and team-building ā able to translate research into shipped products, inspire engineers, and communicate complex ideas succinctly. Skilled at launching high-impact features and leading large engineering organizations.","weakness":"Can be spread thin across big ambitions; balancing foundational research, product deadlines, and public-facing commitments risks diluting focus. 
Public visibility invites intense scrutiny, and a people-first stance sometimes means internal trade-offs slow rapid iteration.","recommendation":"Grow on X by mixing short, high-impact product posts with explainers and behind-the-scenes threads: (1) post concise technical threads that break down one idea per thread, (2) spotlight teammates and lab wins to amplify the org, (3) share reproducible mini-demos or visualizations, (4) host periodic Spaces/AMA sessions, and (5) pin a clear intro thread about Thinking Machines' mission and how followers can engage or contribute.","roast":"You move so fast you probably have a 'launch' button for your coffee machine ā and it ships with a changelog. Also, your inbox has trust issues: it assumes everything you send will become a product roadmap.","win":"Led engineering and product efforts at OpenAI that shipped major products (including the ChatGPT iOS app) and then founded Thinking Machines to scale open, foundational work ā a rare blend of technical leadership, product delivery, and community-building that reshaped how people use AI."},"created":1774576830523,"type":"the innovator","id":"miramurati"},{"user":{"id":"1349186994048561153","name":"Mihika Kapoor","description":"head of product @simile_ai ⢠prev @figma @meta ⢠product-design-eng hybrid","followers_count":17426,"friends_count":1245,"statuses_count":1007,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1556839810718961664/qr-Lybdb_normal.jpg","screen_name":"mihikapoor","location":"in a simulation","entities":{"description":{"urls":[]}}},"details":{"type":"The Innovator","description":"Mihika Kapoor is a product-design-engineering hybrid who ships delightful, human-centered experiencesāex-Figma and Meta, now Head of Product at Simile.ai. She moves fast on big ideas, from debuting Figma Slides to launching a foundation model that simulates human behavior. 
Her feed mixes product announcements, behind-the-scenes joy, and earnest design nerdery.","purpose":"To build tools that make complex human behavior understandable and usefulāso people and teams can tell better stories, make better decisions, and create more meaningful experiences at scale.","beliefs":"Deep empathy for real humans, design as a force-multiplier, iteration beats perfection, transparency in product tradeoffs, and that technology should augment human creativity rather than hide it.","facts":"Fun fact: Mihika celebrates wins with plantsāshe once ordered too many for a new office and gifted the leftovers to coworkers. Also, she announced Figma Slides (aka Flides) with real emotional flair and has a habit of making product launches feel like tiny parties.","strength":"Cross-discipline fluency (design + engineering + product), strong storytelling around launches, ability to ship impactful features, and credibility from top-tier experiences (Figma, Meta) that attracts attention and talent.","weakness":"Obsesses over shipping the right thing which can turn into impatience with slow processes or endless polish; can take on too much because she trusts her own roadmap more than delegation.","recommendation":"Grow on X by turning product moments into learning threads: short explainer threads that break down design decisions, trade-offs, and metrics; demo clips and GIFs of agent simulations; regular behind-the-scenes posts (day-in-the-life, postmortems); run AMAs and Spaces after big launches; reply to high-value threads to build conversation; pin a concise āwhat Simile isā thread and a CTA to demo or subscribe. 
Use visuals, cadence (2ā4 thoughtful posts/week), and convert replies into follow-up content.","roast":"Mihika builds AI that predicts human behavior ā presumably so she can finally justify why she bought 37 plants for the office and still calls it 'optimizing for wellbeing.'","win":"Debuted Figma Slides to the world and helped lead Simile out of stealthāshipping a product at Figma that reached massive attention and launching a company building the first large-scale human-behavior simulation."},"created":1774576795547,"type":"the innovator","id":"mihikapoor"},{"user":{"id":"1230746132122234880","name":"Murtaza Dalal","description":"AI @ Tesla Optimus | Robotics/AI PhD @CMU_Robotics, EECS @UCBerkeley | Opinions entirely and unequivocally my own","followers_count":14474,"friends_count":620,"statuses_count":343,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1825637504420294656/Pa8KDUTd_normal.jpg","screen_name":"mihdalal","location":"San Jose, CA","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"mihdalal.github.io","expanded_url":"https://mihdalal.github.io/","indices":[0,23],"url":"https://t.co/xirYf3ot9i"}]}}},"details":{"type":"The Innovator","description":"Murtaza Dalal is a hands-on robotics innovator who blends CMU/UC Berkeley pedigree with product-scale ambition at Tesla Optimus. He turns hard robotics research into jaw-dropping demos and crisp takes that get the community buzzing. His tagline says it all: opinions entirely (and unequivocally) his own.","purpose":"Drive humanoid robots from lab curiosities to practical learners ā scaling generalist policies, sim2real, and RL so robots can learn rich, dexterous skills from everyday human video instead of costly teleoperation.","beliefs":"Believes scalability beats brittle hand-engineering, that learning from human data is a game-changer, and that demos + results matter more than hot takes. 
Values rigorous experimentation, fast iteration, elite collaboration, and communicating breakthroughs in ways that excite both engineers and the public.","facts":"Fun fact: Murtaza announced joining Tesla Optimus and helped showcase demos that went viral (one tweet hit ~448k views). Heās got a Robotics/AI PhD connection to CMU and EECS roots at UC Berkeley, 14,474 followers, follows 620, and has tweeted 343 times.","strength":"Deep technical chops in robotics/AI, talent for turning complex research into compelling demos, strong network across top labs, and an engaging public voice that drives attention and recruitment.","weakness":"Can lean on hype and demo-gloss (which invites skeptics), sometimes sacrifices nuance for bite-sized excitement, and risks polarizing conversations when replies get heated.","roast":"Your robots can pull off perfect bimanual manipulation and dance routines, yet you still insist on the 'opinions entirely my own' line ā sure, because the bots do all the PR while you humblebrag in 280 characters.","win":"Helping deliver Optimus demos that demonstrate bi-manual, dexterous manipulation learned from human video (reducing reliance on teleop) and turning those demos into viral momentum and hiring buzz.","recommendation":"On X, lean into explainers + clips: post short demo videos with 1ā3 tweet threads that unpack the key idea, share behind-the-scenes snippets and failure cases, host AMAs or Spaces after big demos, pin a concise roadmap tweet, engage skeptically but constructively with critics, and collaborate/quote other researchers to amplify reach. Use captions and alt text so demos are accessible and reshare reproducible notebooks or papers when possible."},"created":1774576775301,"type":"the innovator","id":"mihdalal"},{"user":{"id":"1270149841","name":"Keerthana Gopalakrishnan","description":"Research Lead Gemini Robotics @GoogleDeepmind. Author of \"AI for Robotics\" book. 
Opinions my own.","followers_count":26339,"friends_count":1128,"statuses_count":1795,"profile_image_url_https":"https://pbs.twimg.com/profile_images/2004038362915872768/cl4EO2kI_normal.jpg","screen_name":"keerthanpg","location":"San Francisco, CA","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"keerthanapg.com","expanded_url":"https://keerthanapg.com","indices":[0,23],"url":"https://t.co/nYi5COii5A"}]}}},"details":{"type":"The Innovator","description":"Keerthana Gopalakrishnan is a Research Lead for Gemini Robotics at Google DeepMind, author of the book \"AI for Robotics,\" and a public voice on the future of on-device AI and physical AGI. She blends hands-on engineering enthusiasm with big-picture thinking, from humanoids to pocket-sized LLMs. Her timeline mixes technical wins, hiring calls, and delightfully human takes on culture and craft.","purpose":"To push the frontier where large models meet real-world hardware ā building practical, reliable AGI in the physical world and making advanced AI accessible (and useful) on everyday devices.","beliefs":"She values rigorous engineering, interdisciplinary collaboration, and the idea that engineering feats deserve wider recognition; she believes practical demos and open curiosity accelerate adoption and understanding of AI. She also prizes clear, independent thinking (hence \"opinions my own\").","facts":"Fun fact: She's the author of \"AI for Robotics\" and her tweet about Gemini Nano running on a phone reached ~248k views ā proof she can explain cutting-edge tech to a wide audience. 
Also, she once publicly argued engineers deserve Nobels for reusable rockets.","strength":"Deep technical credibility plus storytelling: can translate complex robotics and LLM advances into shareable, inspiring posts; strong hiring/leadership presence; cross-domain fluency (hardware, models, product).","weakness":"Sometimes leans very technical, which can alienate non-specialist followers; candid takes (e.g., demanding Nobels for engineers) can polarize; as a busy lead, posting cadence may be uneven.","roast":"You build robots that can change humanity's future, champion on-device AI, and still preface every hot take with \"opinions my own\" — as if a legal disclaimer can contain your tendency to steal every tech conversation.","win":"Leading Gemini Robotics at DeepMind and publishing \"AI for Robotics,\" while driving viral awareness for on-device LLMs (one Gemini Nano tweet hit ~248k views) — a rare combo of research leadership, authorship, and broad public impact.","recommendation":"Grow on X by turning technical milestones into snackable threads and behind-the-scenes demos: 1) Post short demo videos or GIFs of robotics work with a 2–5 tweet explainer. 2) Create TL;DR threads for papers and include a one-line takeaway for non-experts. 3) Run occasional AMAs or X Spaces on engineering trade-offs. 4) Pin a hiring thread and repurpose job posts into micro-stories about team wins. 5) Use polls and quick takes to invite cross-discipline conversation and boost engagement from beyond the research bubble."},"created":1774575777476,"type":"the innovator","id":"keerthanpg"},{"user":{"id":"1390993543","name":"Katarina Batina","description":"Design Director @shop. Previously @classpass @artsy. 
Always Katarina never Kat.","followers_count":8306,"friends_count":2587,"statuses_count":3980,"profile_image_url_https":"https://pbs.twimg.com/profile_images/2001343603478552577/hOUqaypI_normal.jpg","screen_name":"katarinabatina","location":"Seattle, WA","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"katarinabatina.com","expanded_url":"http://katarinabatina.com","indices":[0,23],"url":"https://t.co/yPzkhJvAwG"}]}}},"details":{"type":"The Innovator","description":"Design Director at Shop who prototypes the future one elegant pixel (and prototype) at a time. Formerly at ClassPass and Artsy, she loves redesigning spaces, products, and workflows with striking attention to detail. Always Katarina — never Kat.","purpose":"To push product and physical design boundaries by creating elegant, surprising solutions that feel inevitable in hindsight — making everyday experiences more useful, beautiful, and delightful for people and brands.","beliefs":"Good design is a blend of craft, research, and relentless iteration; collaboration beats ego; detailed thinking scales across teams; real innovation comes from prototyping and shipping, not just theorizing.","facts":"Fun fact: she once designed a standing desk with no legs or interface that actually works (tweeted to ~457k views). She led Shop's web refresh and often shares behind-the-scenes office and product redesigns. 
Always signs as Katarina, never Kat.","strength":"Exceptional product and spatial design instincts, strong prototyping chops, high-status credibility (Design Director @Shop), ability to translate artful inspiration into pragmatic solutions, and an engaged audience receptive to visual process posts.","weakness":"Perfectionism can slow rollouts and make delegation tricky; deep focus on craft sometimes narrows the audience to design peers rather than broader product or business communities.","roast":"Katarina will redesign your desk, your homepage, and your sense of self-worth — then enlighten you with a three-slide framework while insisting you call her Katarina, not Kat. She's the reason your office looks cooler than your apartment.","win":"Viral proof-of-concept: the legless standing desk tweet (hundreds of thousands of views) and leading Shop's recent web/design refresh — tangible projects that fused craft, product impact, and public recognition.","recommendation":"Grow on X by leaning into process: post short multi-tweet threads showing before→after with 3–5 step photos, prototype videos, and quick lessons learned; host occasional Spaces on product design; collaborate with product managers/engineers for cross-audience threads; reuse high-performing visuals into pinned threads and a monthly newsletter sign-up; and sprinkle in personality (the 'always Katarina' bit) to make the brand memorable."},"created":1774575749810,"type":"the innovator","id":"katarinabatina"},{"user":{"id":"114088702","name":"Julian Ibarz","description":"TeslaBot Optimus AI 
Lead","followers_count":34318,"friends_count":302,"statuses_count":387,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1834768943657238529/iRw6z5fs_normal.jpg","screen_name":"julianibarz","location":"","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"linkedin.com/in/julianibarz/","expanded_url":"https://www.linkedin.com/in/julianibarz/","indices":[0,23],"url":"https://t.co/jlPBhohS7G"}]}}},"details":{"type":"The Innovator","description":"Julian Ibarz is the TeslaBot Optimus AI Lead who turns sci-fi into sprint plans and prototype videos. He mixes deep technical chops with production-focused hustle, posting high-impact demos and candid program updates. His timeline reads like a progress report for the future of humanoid robotics.","purpose":"To push humanoid robotics from lab demos into reliable, scalable products that help people — by rapidly iterating on autonomy, manipulation, and production engineering until robots can safely and usefully work alongside humans.","beliefs":"Believes in engineering-first progress: build fast, test in the real world, and iterate from failures; values scalability and manufacturability as much as algorithmic novelty; trusts that practical deployment is the fastest path to meaningful AI and robotics impact.","facts":"Fun fact: Julian helped launch the first production vision-based deep neural network at Google that automated Street View number detection, and today leads Optimus AI development at Tesla with prototype hands that can catch balls and robots that can dock, climb stairs, and hand out snacks. 
He has ~34k followers, follows 302 accounts, and has tweeted 387 times — so he's prolific but selective.","strength":"Combines deep research experience with production and manufacturing savvy; excellent at communicating concrete progress (high-engagement demo tweets); credible voice in both academia and industry; able to translate complex technical milestones into compelling short updates.","weakness":"Can come off as overly confident or polarizing when debating peers; intense focus on engineering milestones may under-communicate broader safety/ethics context; occasional shorthand or inside-baseball posts can lose non-expert audiences.","roast":"Julian treats 'prototype' like a weekend hobby project — his idea of casual tweeting is casually teasing a robot that can fetch snacks while you argue with Yann on ML theory. He's the kind of guy who sleeps with a torque wrench under his pillow and calls it R&D.","win":"Shipped the first production-scale vision DNN at Google that automated Street View tasks, and now is leading Tesla's Optimus AI toward production-ready humanoids — turning a research pipeline into real-world robotics demonstrations and a prototype production line.","recommendation":"On X: post short threaded breakdowns of each demo (problem → approach → failure modes → next steps), more behind-the-scenes clips from the production line, regular AMAs or Spaces to tackle critics live, and bite-sized explainers that translate technical metrics into everyday impact. 
Pin a highlight thread, use subtitles on demo videos, and engage top researchers and journalists in replies to amplify reach."},"created":1774575699207,"type":"the innovator","id":"julianibarz"},{"user":{"id":"702654540387127296","name":"Hayden Adams 🦄","description":"Invented the Uniswap protocol, Founder @Uniswap","followers_count":443136,"friends_count":642,"statuses_count":7492,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1988767491254857728/ISVZrouR_normal.jpg","screen_name":"haydenzadams","location":"Ethereum","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"uniswap.org","expanded_url":"https://uniswap.org/","indices":[0,23],"url":"https://t.co/AjUjSA2CZn"}]}}},"details":{"type":"The Innovator","description":"Hayden Adams 🦄 is the builder who turned a whiteboard experiment into Uniswap — a permissionless marketplace that's processed over $2 trillion in volume. He blends technical rigor, product obsession, and unapologetic advocacy for DeFi. Expect sharp takes, viral threads, and a founder ready to defend the future of on-chain finance.","purpose":"To democratize finance by building open, transparent, permissionless market infrastructure and to defend the right to innovate so others can build without gatekeepers. He wants to enable real economic access for anyone with an internet connection and prove that decentralized protocols can be safer and fairer than legacy systems.","beliefs":"Strong faith in open-source, transparency, and on-chain price discovery; skepticism of opaque legacy institutions; conviction that technology and community should protect consumers more effectively than closed incumbents; belief that builders should stay and fight for regulatory clarity rather than hide.","facts":"Fun fact: the Uniswap protocol Hayden invented has processed over $2 trillion in volume. 
Other highlights: founder of Uniswap Labs in NYC, ~443,136 followers, ~7,492 tweets, and once received a Wells notice from the SEC (and had his bank account closed by a major bank).","strength":"Inventive product vision, deep technical chops, credibility with builders, excellent storyteller when launching concepts, and an ability to rally a passionate community and ecosystem around open infrastructure.","weakness":"Can be polarizing and blunt — which energizes supporters but escalates regulatory and PR battles; high personal and legal risk from public stances; sometimes public frustration may overshadow educational messaging.","roast":"You decentralized liquidity pools but somehow centralized every drama into your mentions — your notifications must be the crypto equivalent of a flash crash with better branding.","win":"Built Uniswap from an experiment into a global protocol that enabled trillions in on-chain volume, spawned thousands of forks and projects, and redefined how markets can operate without gatekeepers.","recommendation":"On X, lean into education + narrative: pin a rolling legal/product update thread for followers, post short explainer threads with simple visuals and on-chain metrics, host regular Spaces Q&A with builders, spotlight projects and community wins built on Uniswap, repurpose longer essays into multi-tweet threads, engage top commenters to convert critics into collaborators, and balance fiery advocacy with clear, accessible guides so newcomers can onboard and stick around."},"created":1774575199651,"type":"the innovator","id":"haydenzadams"},{"user":{"id":"20938766","name":"Gavin Nelson","description":"Interaction designer @OpenAI","followers_count":65012,"friends_count":830,"statuses_count":6283,"profile_image_url_https":"https://pbs.twimg.com/profile_images/1910772072546193408/yFnZH7wE_normal.jpg","screen_name":"Gavmn","location":"Silicon 
Valley","entities":{"description":{"urls":[]},"url":{"urls":[{"display_url":"nelson.co","expanded_url":"https://nelson.co","indices":[0,23],"url":"https://t.co/IxojA6whZs"}]}}},"details":{"type":"The Innovator","description":"Gavin Nelson is an interaction designer at OpenAI who experiments at the intersection of playfulness and utility — shipping micro-experiments that make interfaces feel delightfully inevitable. His timeline is equal parts elegant mockups, clever constraints-hacks, and approachable design math. The result: ideas that spread fast and make other designers steal them (politely).","purpose":"To push everyday interfaces forward by prototyping bold but usable interaction ideas that others can learn from and iterate on — turning playful experiments into practical patterns used across products.","beliefs":"Design should be joyful, legible, and brave: constraints are opportunities, micro-details matter, and sharing process accelerates better products. He values clarity over noise, thoughtful friction over lazy convenience, and believes elegant interactions can make complex systems feel human.","facts":"Fun fact: Gavin has ~65,012 followers and a habit of making tiny UI experiments go viral — one 'Time Machine' message interface post reached ~570k views and another parallax icon post hit ~427k views. 
He's tweeted 6,283 times and frequently pairs visual demos with simple formulas.","strength":"Exceptional at turning a sketch or constraint into a clear, shareable idea — strong visual craft, concise explanations (even math when needed), and an ability to spark rapid community discussion and remixing.","weakness":"Can be so gleefully experimental that follow-through or documentation lags — ideas sometimes land as tasty one-off tweets rather than repeatable case studies, and the niche technical dives can alienate casual followers.","roast":"Gavin will remove your app labels to achieve 'purity' and call it progress — then tweet a five-step formula to justify why you're now guessing icons like it's a puzzle game. Charming chaos, but good luck finding the Settings.","win":"His 'Time Machine'-like iMessage history UI tweet went massively viral (~570k views, 7.3k likes), turning a playful concept into a widely discussed interaction pattern and cementing him as a go-to voice for bold micro-UX ideas.","recommendation":"Turn experiments into repeatable content: publish short multi-tweet breakdowns showing problem → constraint → decision → prototype. Pin a 'Design Experiments' thread linking demos, process files, and templates. Post short screen-record videos or GIFs of interactions, host occasional Spaces to talk through trade-offs, collaborate with product folks for case studies, and end tweets with one clear call-to-action (download, try, remix, or follow). Use consistent hashtags (#DesignExperiments, #MicroUX) and reply threads to capture and grow the conversation."},"created":1774574887575,"type":"the innovator","id":"gavmn"}],"activities":{"nreplies":[],"nbookmarks":[],"nretweets":[],"nlikes":[],"nviews":[]},"interactions":null}},"settings":{},"session":null,"routeProps":{"/creators/:username":{}}}