Get live statistics and analysis of Jeffrey Emanuel's profile on X / Twitter

Former Quant Investor, now building @lumera (formerly called Pastel Network) | My Open Source Projects: github.com/Dicklesworthst…

11k following · 26k followers

The Innovator

Jeffrey Emanuel is a brilliant mind transitioning from quant investing to groundbreaking tech development at Lumera. He leverages cutting-edge AI and coding tools to tackle complex projects, showcasing a passion for making advanced knowledge accessible and practical. A relentless explorer in the intersection of technology, programming, and history, he’s always pushing the boundaries of innovation.

Impressions: 2.6M (-126.4k) · $490.99
Likes: 16.9k (+3.2k) · 60%
Retweets: 856 (+217) · 3%
Replies: 947 (+40) · 3%
Bookmarks: 9.3k (-290) · 33%

Top users who interacted with Jeffrey Emanuel over the last 14 days

@steipete

Polyagentmorous. Came back from retirement to mess with AI. Enjoyin' Twitter but no time to read it all? Get @sweetistics

4 interactions
@raw_works

building tools for builders | founder @polySpectra | cofounder @cyprismaterials | cohort 1 @activatefellows @berkeleylab | PhD @caltech | AB @princeton | #rwri

3 interactions
@leerob

Teaching developers @cursor_ai, previously @vercel

2 interactions
@Yuchenj_UW

Co-founder & CTO @hyperbolic_labs cooking fun AI systems. Prev: OctoAI (acquired by @nvidia) building Apache TVM, PhD @ University of Washington.

2 interactions
@davidjglassMD

Book: Experimental Design for Biologists. Journal: Skeletal Muscle. Focus areas: Aging; Muscle. Works for a biotech company; views are mine.

2 interactions
@devmuradahmed

Full Stack Dev | .NET | Angular | Building AI-powered Second Brain | Where your thoughts find the perfect place | thinkncache.me

2 interactions
@metapog

𓂀 𓁿 𓁬𓁵 𓁿 𓂀 𓁟

2 interactions
@oyacaro

spiritual ai guy i guess

2 interactions
@codewithimanshu

Daily posts on AI , Tech, Programing, Tools, Jobs, and Trends | 500k+ (LinkedIn, IG, X) Collabs- abrojackhimanshu@gmail.com

2 interactions
@JauquetW

Reading more, listening to audiobooks, trying to parent better

1 interaction
@leodoan_

software engineer. crafting impactful things to open source world | building overwrite: mnismt.com/overwrite | changelogs: changelogs.directory

1 interaction
@LukaFinzgar

Mobile developer and technology enthusiast, hippie in spare time.

1 interaction
@verioussmith

@StartempireWire Relaunch - EO Q4 2025 🚀 ▫️💻 Currently : @asapdigest & @philoveracity ▫️🚀 Relaunching: @startempirewire ▫️ 📢 Previously: @wordcamprs

1 interaction
@Lon

Absurdist intern. Exquisite shitpoasting. High-school dropout + teenage dad. Failed angel investor. EP on Gary Busey film. SIGMOD winner. Shipped infra you use.

1 interaction
@kirjd

Indiehacker trying to earn the first dollar. Building Speakmac - fully offline and fast speech to text for Mac - buy once, use forever

1 interaction
@lamn87

enjoy life

1 interaction
@guitchounts

AI + biology venture creation @FlagshipPioneer | prev: neuroscientist @Harvard | Tweets and X's don't represent my employer

1 interaction
@rwdaigle

Latently exploring the space at @GlideApps. Alumni of @Heroku & @Spreedly. CRE and SMB acquisition on the side.

1 interaction

Jeffrey’s follower list might be bigger if he tweeted less like a highly caffeinated coder buried in a labyrinth of markdown and swarming AI agents, and more like someone sharing the occasional meme — but hey, not everyone can be the life of the digital party!

Successfully orchestrating a swarm of AI sub-agents to not only decode and reformat Kissinger’s thesis but also create an enhanced, digitally indexed, and fully source-linked version — demonstrating masterful command over AI and programming creativity.

To revolutionize how complex information and technology intersect, making cutting-edge research and programming tools more accessible and effective through innovative approaches and open-source contributions.

Values data-driven insights, open knowledge sharing, and the transformative power of technology as a way to solve intricate problems. Believes in pushing technical boundaries while maintaining intellectual rigor and accuracy.

Exceptional ability to combine technical expertise, strategic thinking, and AI collaboration to create novel solutions with efficiency and scale. Highly skilled at automating laborious tasks and integrating multidisciplinary tools to enhance productivity.

Can sometimes get deeply absorbed in highly technical or niche aspects that may alienate broader audiences, and might overlook simpler communication approaches in favor of heavy detail.

To grow his audience on X, Jeffrey should create more bite-sized explainer threads simplifying his projects and insights, while engaging directly with communities interested in AI, programming, and tech innovation. Leveraging visual snippets and periodic AMA sessions can turn his deep expertise into a magnet for curious followers.

Jeffrey turned Henry Kissinger’s unwieldy 400-page thesis into a beautifully navigable digital masterpiece using AI-powered agents—a premier way to consume a historic academic work today!

Top tweets of Jeffrey Emanuel

I wanted to read Henry Kissinger’s 400-page undergraduate thesis (it has an incredible first page), but really didn’t feel like dealing with a scanned PDF that’s annoying to read on a phone without constantly zooming and panning. So I decided to convert it to a nice markdown format using OCR and LLMs. Then I thought it would be nice to fix the footnotes and get rid of the page breaks and to fix the line breaks and other things like that. I was already working on some other coding projects, so I had the idea of loading up the draft markdown file in Claude Code and having it work on fixing these issues using a swarm of 20 sub-agents, which worked well. Then I thought it would be cool to link to the full sources for all the many references on sites like the Internet Archive or Project Gutenberg, so I had another swarm of sub-agents do a ton of searches to track the links down and insert them into the footnotes and bibliography. Then I figured that I might as well run it through my mind-map generator and summarization code to see what it comes up with, so I tried that. But now I had a few files to present, so I needed some kind of index page. So I asked Codex with GPT-5 to whip up a slick-looking web page to present the stuff nicely, which it did a yeoman’s job with. Note that I was already working with these tools in a bunch of other sessions on other projects, so my work here was occasionally giving some instructions to the coding agents and letting them crank away. I really didn’t spend much active time on this! Anyway, the net result is clearly the premier way in the world today to consume Henry Kissinger’s undergraduate thesis electronically. I’ll post the link in the next tweet to avoid getting punished by the algorithm. As for the thesis itself, it’s wild how erudite he was as a young man, and also what a great writer he was. And even more impressive considering that English was his second language.
The thesis is basically him trying to come to grips with, and to mentally organize in an internally consistent way, a vast swath of Western thought. From what I’ve read so far, I think he did a pretty good job. Incidentally, his thesis is the reason Harvard changed its rules to limit the undergrad honors thesis to a maximum of 35,000 words. Good thing they didn’t apply this silly limit to Henry!

546k

So Python 3.14 finally came out for real yesterday. Finally removing the GIL (global interpreter lock), which allows for way faster multithreaded code without dealing with all the brain damage and overhead of multiprocessing or other hacky workarounds. And uv already fully supports it, which is wildly impressive. But anyway, I was a bit bummed out, because the main project I’m working on has a massive number of library dependencies, and it always takes a very long time to get mainline support for new Python versions, particularly when they’re as revolutionary and different as version 3.14 is. So I was resigned to endure GIL-hell for the indefinite future. But then I figured, why not? Let me just see if Codex and GPT-5 can power through it all. So I backed up my settings and asked Codex to try, giving it the recent blog post from the uv team to get it started. There were some major roadblocks. I use PyTorch, which is notoriously slow to update. And also pyarrow, which also didn’t support 3.14. Same with cvxpy, the wrapper to the convex optimization library. Still, I wanted to see what we could do even if we had to deal with the brain damage of “vendoring” some libraries and building some stuff from scratch in C++, Rust, etc. using the latest nightly GitHub repositories instead of the usual PyPI libraries. I told Codex to search the web, to read GitHub issue pages, etc., so that we didn’t reinvent the wheel (or WHL I should say, 🤣) unnecessarily. Why not? I could always test things, and if I couldn’t get it to work, then I could just retreat back to Python 3.13, right? No harm, no foul. Well, it took many hours of work, almost all of it done by Codex while I occasionally checked in with it, but it managed to get everything working! Sure, it took a bunch of iterations, and I had to go tweak some stuff to avoid annoying deprecation warnings (some of which come from other libraries, so I ultimately had to filter them).
But those libraries will update over time to better support 3.14 and eventually I won’t need to use any of these annoying workarounds. Codex even suggested uploading the compiled .whl artifacts to Cloudflare’s R2 (like S3) so we could reuse them easily across machines, and took care of all the details for me. I would never think to do that on my own. Every time there was another complication or problem (for instance, what is shown in the screenshot below), Codex just figured it out and plowed through it all like nothing. If you’ve never tried to do something like this in the “bad old days” prior to LLMs, it was a thankless grind that could eat up days and then hit a roadblock, resulting in a total wipeout. So it was simply too risky to even try it most of the time; you were better off just waiting 6 or 9 months for things to become simple again. Anyway, I still can’t really believe it’s all working! We are living in the future.
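The free-threading benefit described here can be sketched with ordinary stdlib threading: on a free-threaded 3.13+/3.14 build the workers below run CPU-bound code truly in parallel, while on a standard GIL build they serialize. The `sys._is_gil_enabled` probe is a real but private CPython API (added in 3.13), hence the guarded lookup:

```python
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def count_primes(limit: int) -> int:
    """CPU-bound work: count primes below `limit` by trial division."""
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

# On free-threaded 3.13+/3.14 builds this reports whether the GIL is active;
# on standard builds the attribute does not exist at all.
gil_check = getattr(sys, "_is_gil_enabled", None)
print("GIL enabled:", gil_check() if gil_check else "yes (standard build)")

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(count_primes, [30_000] * 4))
print(results, f"{time.perf_counter() - start:.2f}s")
```

Under a GIL, the same code is correct but no faster than running the four calls sequentially; that is the difference 3.14's free-threaded build makes without any code changes.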

233k

DeepSeek just released a pretty shocking new paper. They really buried the lede here by referring to it simply as DeepSeek OCR. While it’s a very strong OCR model, the purpose of it and the implications of their approach go far beyond what you’d expect of “yet another OCR model.” Traditionally, vision LLM tokens almost seemed like an afterthought or “bolt on” to the LLM paradigm. And 10k words of English would take up far more space in a multimodal LLM when expressed as intelligible pixels than when expressed as tokens. So those 10k words may have turned into 15k tokens, or 30k to 60k “visual tokens.” So vision tokens were way less efficient and really only made sense to use for data that couldn’t be effectively conveyed with words. But that gets inverted now from the ideas in this paper. DeepSeek figured out how to get 10x better compression using vision tokens than with text tokens! So you could theoretically store those 10k words in just 1,500 of their special compressed visual tokens. This might not be as unexpected as it sounds if you think of how your own mind works. After all, I know that when I’m looking for a part of a book that I’ve already read, I imagine it visually and always remember which side of the book it was on and approximately where on the page it was, which suggests some kind of visual memory representation at work. Now, it’s not clear how exactly this interacts with the other downstream cognitive functioning of an LLM; can the model reason as intelligently over those compressed visual tokens as it can using regular text tokens? Does it make the model less articulate by forcing it into a more vision-oriented modality? But you can imagine that, depending on the exact tradeoffs, it could be a very exciting new axis to greatly expand effective context sizes. Especially when combined with DeepSeek’s other recent paper from a couple weeks ago about sparse attention. 
For all we know, Google could have already figured out something like this, which could explain why Gemini has such a huge context size and is so good and fast at OCR tasks. If they did, they probably wouldn’t say because it would be viewed as an important trade secret. But the nice thing about DeepSeek is that they’ve made the entire thing open source and open weights and explained how they did it, so now everyone can try it out and explore. Even if these tricks make attention more lossy, the potential of getting a frontier LLM with a 10 or 20 million token context window is pretty exciting. You could basically cram all of a company’s key internal documents into a prompt preamble and cache this with OpenAI and then just add your specific query or prompt on top of that and not have to deal with search tools and still have it be fast and cost-effective. Or put an entire code base into the context and cache it, and then just keep appending the equivalent of the git diffs as you make changes to the code. If you’ve ever read stories about the great physicist Hans Bethe, he was known for having vast amounts of random physical facts memorized (like the entire periodic table; boiling points of various substances, etc.) so that he could seamlessly think and compute without ever having to interrupt his flow to look something up in a reference table. Having vast amounts of task-specific knowledge in your working memory is extremely useful. This seems like a very clever and additive approach to potentially expanding that memory bank by 10x or more.
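The compression arithmetic in this thread works out as follows; all figures are the thread's own illustrative numbers, not measurements:

```python
# Back-of-envelope numbers from the thread (illustrative, not measured).
words = 10_000
text_tokens = int(words * 1.5)                 # ~1.5 BPE tokens per English word
naive_visual_tokens = text_tokens * 3          # old regime: vision cost ~2-4x more
compressed_visual_tokens = text_tokens // 10   # DeepSeek-OCR's claimed ~10x compression

print(text_tokens, naive_visual_tokens, compressed_visual_tokens)

# What 10x compression would buy in effective context, if reasoning quality holds:
base_context = 1_000_000  # tokens in a large frontier-model context window
effective_words = base_context * 10 / 1.5
print(f"Effective text capacity: ~{effective_words:,.0f} words")
```

The open question flagged above still applies: this only matters if the model can reason over the compressed visual tokens nearly as well as over text tokens.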

200k

I finally got around to making a tool I've wanted for a long time: you can basically think of it as being "like Gmail for coding agents." If you've ever tried to use a bunch of instances of Claude Code or Codex at once across the same project, you've probably noticed how annoying it can be when they freak out about the other agent changing the files they're working on. Then they start doing annoying things, like restoring files from git, in the process wiping out another agent's work without a backup. Or if you've tried to have agents coordinate on two separate repos, like a Python backend and a Next.js frontend for the same project, you may have found yourself acting as the go-between and liaison between two or three different agents, passing messages between them or having them communicate by means of markdown files or some other workaround. I always knew there had to be a better way. But it's hard to get the big providers to offer something like that in a way that's universal, because Anthropic doesn't want to integrate with OpenAI's competitive coding tool, and neither wants to deal with Cursor or Gemini-CLI. So a few days ago, I started working on it, and it's now ready to share with the world. Introducing the 100% open-source MCP Agent Mail tool. This can be set up very quickly and easily on your machine and automatically detects all the most common coding agents and configures everything for you. I also include a ready-made blurb (see the README file in the repo, link in the next tweet) that you can add to your existing AGENTS dot md or CLAUDE dot md file to help the agents better leverage the system straight out of the gate. It's almost comical how quickly the agents take to this system like a fish to water. They seem to relish it, sending very detailed messages to each other just like humans do, and start coordinating in a natural, powerful way. They even give each other good ideas and pushback on bad ideas.
They can also reserve access to certain files to avoid the "too many cooks" problems associated with having too many agents all working on the same project at the same time, all without dealing with git worktrees and "merge hell." This also introduces a natural and powerful way to do something I've also long wanted, which is to automatically have multiple different frontier models working together in a collaborative, complementary way without me needing to be in the middle coordinating everything like a parent setting up playdates for their kids. And for the human in the loop, I made a really slick web frontend where you can view all the messages your agents are sending each other in a nice, Gmail-like interface, so you can monitor the process. You can even send a special message to some or all your agents as the "Human Overseer" to give them a directive (of course, you can also just type that manually into each coding agent, too.) I made this for myself and know that I'm going to be getting a ton of usage out of it going forward. It really lets you unleash a massive number of agents using a bunch of different tools/models, and they just naturally coordinate and work with each other without stepping on each other's toes. It lets you as the human overseer relax a bit more as you no longer have to be the one responsible for coordinating things, and also because the agents watch each other and push back when they see mistakes and errors happening. Obviously, the greater the variety of models and agent tools you use, the more valuable that emergent peer review process will be. Anyway, give it a try and let me know what you think. I'm sure there are a bunch of bugs that I'll have to iron out over the next couple days, but I've already been productively using it today to work on another project and it is pretty amazingly functional already!
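The two mechanisms described, inter-agent messages and advisory file reservations, can be sketched as a toy in-memory mailbox. The class and method names here are invented for illustration and are not the real MCP Agent Mail API (the actual tool exposes these operations over MCP with persistent storage):

```python
from dataclasses import dataclass, field

@dataclass
class AgentMailbox:
    """Toy sketch: message passing between named agents plus
    advisory file reservations to avoid "too many cooks" clashes."""
    inboxes: dict = field(default_factory=dict)       # agent -> [(sender, body)]
    reservations: dict = field(default_factory=dict)  # path -> holding agent

    def send(self, sender: str, recipient: str, body: str) -> None:
        self.inboxes.setdefault(recipient, []).append((sender, body))

    def read(self, agent: str) -> list:
        # Drain and return the agent's inbox.
        return self.inboxes.pop(agent, [])

    def reserve(self, agent: str, path: str) -> bool:
        # Advisory lock: succeeds only if no other agent holds the file.
        holder = self.reservations.setdefault(path, agent)
        return holder == agent

    def release(self, agent: str, path: str) -> None:
        if self.reservations.get(path) == agent:
            del self.reservations[path]

mail = AgentMailbox()
mail.send("claude-1", "codex-1", "I'm refactoring api/models.py; hold off.")
print(mail.reserve("claude-1", "api/models.py"))   # True
print(mail.reserve("codex-1", "api/models.py"))    # False: already held
print(mail.read("codex-1"))
```

The reservations are advisory rather than enforced, which is the point: agents that check before editing coordinate gracefully, without the overhead of git worktrees or hard file locks.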

115k

Most engaged tweets of Jeffrey Emanuel

(The same three threads listed above: the Kissinger thesis conversion at 546k, the MCP Agent Mail launch at 115k, and the Python 3.14 migration at 233k.)

People with the Innovator archetype

The Innovator

I have made entire HFT systems from nothing

1k following · 3k followers
The Innovator

Shitposting Jr I aped every coin I call, DYOR/NFA

905 following · 1k followers
The Innovator

AI Educator. Web Developer, Web Designer, #AIforGood Advocate.

1k following · 1k followers
The Innovator

Founding Design Engineer @mail0dotcom

1k following · 6k followers
The Innovator

👑 AI coding - professional engineer 📊 Visualization - AI visualization tools; explorer of AI text-to-image 💻 Prompt Engineering - prompt enthusiast. Formerly the top Simplified-Chinese evangelist for the aider AI command-line coding tool. Currently researching parallel programming

950 following · 8k followers
The Innovator

Making things on the web since the dial-up days. Values: truth, curiosity, and improvement. ❤️ @sarcasmically (Jaquith is pronounced JAKE-with)

1k following · 23k followers
The Innovator

ValidatorVN delivers high-performance, security-focused validator services with a strong commitment to supporting proof-of-stake networks.

1k following · 11k followers
The Innovator

Liminal Thinker, Innovator & Educator Launched: resume.fail Author: @AtomicNoteTakin Building @flowtelic YouTube: youtube.com/@Martin_Adams

2k following · 4k followers
The Innovator

Exploring AI & Tech Insights 🤖

58 following · 37 followers
The Innovator

Building @every | Also tools for community creators: curatedconnections.io

342 following · 1k followers
The Innovator

Easy LLM context for all! ✨pip install attachments Inspired by: ggplot2, DSPy, claudette, dplyr, OpenWebUI! Follow for: API design, AI, and Data 🐍CC📜🛠 maker

816 following · 5k followers
The Innovator

✨ AI should be about empowering humans, building understanding, and making dreams realities. 👩‍💻 DevX Eng. Lead @GoogleDeepMind ex-@GitHub || views = my own!

1k following · 70k followers

