Get live statistics and analysis of Tech with Mak's profile on X / Twitter

AI, coding, software, and whatever's on my mind.

634 following · 14k followers

The Analyst

Tech with Mak is a deep-diving tech guru who unravels the complexities of AI, coding, and software development with clarity and precision. Their content is packed with insightful explanations and practical knowledge, perfect for both beginners and seasoned developers. Always ready to turn complex concepts into digestible knowledge nuggets, Mak makes tech approachable and engaging.

Impressions: 988.7k (-21.3k) · $185.32
Likes: 14.3k (-783) · 48%
Retweets: 2.6k (-59) · 9%
Replies: 847 (-19) · 3%
Bookmarks: 12.1k (-698) · 41%

Top users who interacted with Tech with Mak over the last 14 days

@codewithimanshu

Daily posts on AI, Tech, Programming, Tools, Jobs, and Trends | 500k+ (LinkedIn, IG, X) | Collabs: abrojackhimanshu@gmail.com

2 interactions
@ajverharen

Award-winning AI Innovator with demonstrated solutions that can be adapted for enterprise use. Vision/Video Processing, Project Management, Retail Agents

1 interaction
@aravind3sundar

Exploring AI, agentic systems, and the future of marketing. Breaking down the latest trends so you don't get left behind. Follow for latest updates!!

1 interaction
@DilshadAI1

AI Influencer | Helping you make money with AI, Tech Tools & Digital Skills | DM for Exclusive Collaboration: dilshadhussain2577@gmail.com

1 interaction
@tmaiaroto

Imagine whatever you were expecting to see in my bio is written right here. Fair warning: I make jokes and they might be bad.

1 interaction
@vicky_grok

50K+ Audience on LinkedIn | AI Engineer | Resume Writer | AI Content Creator | AI Enthusiast | Influencer | DM for Promotion

1 interaction
@AIWorkflowsLab

Stop working harder. Automate smarter. AI tools • Workflow systems • Time freedom. Free AI toolkit ↓

1 interaction
@Adilson_Ai

AI Educator. Helping you make money with AI, Tech Tools & Digital Skills | DM for Collaboration | corpmarkg@gmail.com

1 interaction
@Manifeast56

AI Educator | Empowering you to generate income using AI tools and Digital. DM/Email open for collaboration.

1 interaction
@suhrabautomates

AI & Automation Technologist bringing Efficiency to the World | Building the Future of AI Organizations | Scaling businesses via intelligent automations

1 interaction
@Tirthhh30

unrivaled acuity combined with relentless tenacity renders me a formidable adversary in every sphere of human pursuit.

1 interaction
@Dilesh2004

AI & Tech | Future of Innovation | Simplifying complex tech for everyday minds | #AI #Tech

1 interaction

Tech with Mak: turning 'too much info' into a fine art. If only their tweets came with a TL;DR, even their followers might get a breather between brain cramps and code sprints.

Achieved massive engagement on tweets dissecting state-of-the-art AI concepts and software best practices, with multiple posts surpassing 150k views and thousands of likes and retweets, cementing their place as a trusted tech educator.

To educate and inform a broad audience about the intricacies of AI, software development, and system design, empowering others to build smarter tech solutions and master software engineering best practices.

Mak values accuracy, clarity, and continuous learning. They believe that complex problems become manageable when broken down systematically and presented transparently, and that sharing knowledge drives community growth.

Exceptional ability to analyze and clearly communicate sophisticated technical concepts, combined with consistent, data-backed content that boosts user engagement and trust.

Sometimes the technical depth might overwhelm casual followers who prefer lighter or more varied content; also, being heavily focused on explanations could limit personal storytelling that boosts relatability.

To grow their audience on X, Tech with Mak should complement technical deep dives with bite-sized tips, relatable anecdotes, or quick polls to spark conversation. Engaging more actively through replies and leveraging threads can turn followers into a community.

Mak's tweets often explain technical topics like LLMs, RAG, and Nginx's architecture in remarkable detail, demonstrating a knack for both technical depth and audience-friendly tone.

Top tweets of Tech with Mak

My doctor told me to reduce stress. I replaced Apache with Nginx.

Nginx (pronounced "engine-x") is a web server, reverse proxy, and load balancer. As a reverse proxy and load balancer, it manages connections between clients and backend servers. It is free and open-source, and it is renowned for its efficiency, stability, and ability to handle massive loads concurrently.

Under the hood:
- At its heart, Nginx is an event-driven web server. It doesn't dedicate a thread to each incoming connection (like traditional models); instead, it relies on a single (or a few) worker processes to manage multiple connections concurrently.
- Nginx uses a non-blocking I/O model: efficient system calls (like epoll on Linux) watch multiple file descriptors (connections, sockets, etc.).
- When a connection is ready, Nginx performs the read/write operation asynchronously. It doesn't wait for the operation to complete; it delegates the task to the operating system and immediately moves on to handle other events.
- When the asynchronous operation finishes, the operating system triggers a callback in Nginx, which processes the data and may schedule more asynchronous operations.
This enables Nginx to handle thousands of simultaneous connections with minimal resources.

Master and worker processes:
- The master process sets up the infrastructure (such as listening sockets) and delegates connection handling and request processing to the worker processes.
- Each worker process runs its own independent event loop using the non-blocking I/O model, doing the heavy lifting of handling client requests and interacting with backend servers.
- This separation lets Nginx handle numerous concurrent connections efficiently. Nginx balances both approaches: multiple worker processes, each event-driven internally rather than multi-threaded.

Key functionalities:
- Web serving: serves static content (HTML, CSS, images) efficiently.
- Reverse proxy: forwards requests to backend servers, optionally adding security or caching, and can handle SSL/TLS termination.
- Load balancing (Layer 4 & 7): distributes traffic across backend servers using algorithms such as round-robin or least connections, improving scalability and availability.

Caveats: Nginx performance can be impacted by long-running requests that block the event loop; for computationally intensive tasks, it may benefit from using threads or offloading work to external processes.

Follow @techNmak for more.

74k
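Nginx itself is implemented in C, but the event-driven, non-blocking pattern the post describes can be illustrated with a small Python sketch built on the standard-library selectors module (which uses epoll on Linux under the hood). This is a toy echo server, not Nginx's actual implementation; the 127.0.0.1:8080 address and 4096-byte buffer are arbitrary choices for the example.

```python
# Toy single-process, event-driven echo server: one loop, many connections,
# no thread per client - the same idea the post attributes to Nginx workers.
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS

def accept(server_sock):
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    # Register the new connection; the loop calls echo() when it is readable.
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)    # socket is ready, so this call does not block
    if data:
        conn.send(data)       # best-effort echo; fine for a sketch
    else:
        sel.unregister(conn)  # client closed the connection
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:  # the event loop: wait for ready sockets, dispatch their callbacks
    for key, _mask in sel.select():
        key.data(key.fileobj)
```

Opening several `nc 127.0.0.1 8080` sessions against it shows a single process serving them all concurrently, which is the core of the epoll argument above.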

What is RAG? What is Agentic RAG?

RAG (Retrieval-Augmented Generation) connects a generation model to external knowledge through retrieval. Here's how it works:
1. A user submits a query.
2. The system searches a pre-indexed set of documents (typically stored in a vector database).
3. The most relevant chunks are retrieved.
4. These chunks are appended to the original query.
5. The combined input is sent for generation.
The goal: to provide the model with context so it can generate more accurate, source-aware responses. But in traditional RAG, everything happens in a single pass, with no planning, no evaluation, and no retrying.

Agentic RAG builds on the same foundation but introduces intelligent agents, each with a specific role, to improve the overall process. Instead of a single static pipeline, Agentic RAG becomes a multi-step, adaptive system. It typically includes:
1. A planning agent that breaks down the user query and decides what needs to be retrieved.
2. A retrieval agent that reformulates the query (if needed) and pulls in information, not just from documents but potentially from APIs, tools, or dynamic sources.
3. A generator agent that constructs a response using the retrieved data.
4. A judge or evaluation agent that reviews the output; if it isn't good enough, the system can refine its plan or regenerate the answer.
This setup allows for iterative reasoning, self-correction, tool integration, and dynamic, context-aware responses.

Follow @techNmak

75k
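The five numbered steps map directly onto code, so here is a minimal, self-contained sketch of the single-pass flow. The word-overlap scorer stands in for an embedding model plus vector database, generate() is a placeholder rather than a real LLM call, and the document snippets are invented for illustration.

```python
# Minimal single-pass RAG sketch: retrieve -> build prompt -> generate.
from collections import Counter

DOCUMENTS = [
    "Nginx is an event-driven web server and reverse proxy.",
    "RAG appends retrieved document chunks to the user query before generation.",
    "uv is a fast Python package and project manager written in Rust.",
]

def score(query: str, doc: str) -> int:
    # Word-overlap count as a stand-in for vector similarity search.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(DOCUMENTS, key=lambda doc: score(query, doc), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (any chat-completion backend would go here).
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

def rag(query: str) -> str:
    context = "\n".join(retrieve(query))                   # steps 2-3
    prompt = f"Context:\n{context}\n\nQuestion: {query}"   # step 4
    return generate(prompt)                                # step 5, single pass

print(rag("How does RAG combine retrieval with generation?"))
```

An agentic version would wrap this call in a loop that plans, re-queries, and judges the output before returning it, as described in the post.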

(Same post as the Nginx breakdown above.)

66k

RAG has evolved far beyond its original form. When people hear Retrieval-Augmented Generation (RAG), they often think of the classic setup: retrieve documents → feed into the LLM → generate an answer. But in practice, RAG has branched into many specialized patterns, each designed to solve different challenges around accuracy, latency, compliance, and context. Here are some of the most important categories:
- Standard RAG: the original retrieval + generation (RAG-Sequence, RAG-Token).
- Graph RAG: connects LLMs with knowledge graphs for structured reasoning.
- Memory-Augmented RAG: external memory for long-term context.
- Multi-Modal RAG: retrieves across text, images, audio, and video.
- Streaming RAG: real-time retrieval for live data (tickers, logs).
- ODQA RAG: open-domain QA, one of the earliest and most popular uses.
- Domain-Specific RAG: tailored retrieval for legal, healthcare, or finance.
- Hybrid RAG: combines dense and sparse retrieval for higher recall.
- Self-RAG: lets the model reflect and refine before the final output (Meta AI, 2023).
- HyDE (Hypothetical Document Embeddings): improves retrieval by first generating "mock" documents to embed (see the sketch below).
- Recursive / Multi-Step RAG: multi-hop retrieval and reasoning chains.
Others, like Agentic RAG, Modular RAG, Knowledge-Enhanced RAG, and Contextual RAG, are best thought of as system design patterns rather than strict categories, but they are useful extensions for specific use cases. The accompanying image maps out 16 different types of RAG with their features, benefits, applications, and tooling examples. Whether you're building production-grade assistants, domain-specific copilots, or real-time monitoring systems, the right flavor of RAG can make all the difference.

Follow @techNmak

77k
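HyDE is the least self-explanatory item in the list, so here is a brief illustrative sketch. The llm() and embed() helpers are hypothetical placeholders, not any particular library's API; the point is only that the vector used for retrieval comes from a generated "mock" answer rather than from the raw query.

```python
# Minimal sketch of HyDE (Hypothetical Document Embeddings).
def llm(prompt: str) -> str:
    # Stand-in for a chat model that drafts a short hypothetical answer.
    return "A hypothetical passage that answers the question in detail..."

def embed(text: str) -> list[float]:
    # Stand-in for an embedding model; real systems return dense vectors.
    return [float(ord(c)) for c in text[:8]]

def hyde_retrieval_vector(query: str) -> list[float]:
    # 1. Ask the model to imagine a document that would answer the query.
    mock_doc = llm(f"Write a short passage that answers: {query}")
    # 2. Embed the mock document instead of the raw query; nearest-neighbour
    #    search in the vector store then uses this vector.
    return embed(mock_doc)

print(hyde_retrieval_vector("How does Nginx handle concurrent connections?"))
```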

Most engaged tweets of Tech with Mak

I am a Senior Engineer with 11+ years of experience (77K+ on LinkedIn) and would love to #connect with people who are interested in: Software Engineering - Data Engineering - Content Creation - Frontend - Backend - Full-stack - Mobile Development - Google Cloud - Azure - AWS - AI & Machine Learning - ReactJS/NextJS - Open Source - UI/UX - Freelancing. #letsconnect #buildinpublic

5k

I want to #connect with people who are interested in: Software Engineering - Data Engineering - Content Creation - Frontend - Backend - Full-stack - Mobile Development - Google Cloud - Azure - AWS - AI & Machine Learning - ReactJS/NextJS - Open Source - UI/UX - Freelancing. #letsconnect #buildinpublic

3k

Breaking: this company is about to create a billion developers. @lovable_dev is changing how software gets built, again. People are building thousands of dollars' worth of software in minutes. These are some of the best creations with Lovable's new AI Agent:

7k

The Python ecosystem has a new standard, and it's called uv. From the creators of ruff, uv is an all-in-one "Cargo for Python" written in Rust, and it is a massive leap forward. It's not just "a faster pip." It's a single, cohesive tool designed to replace an entire collection of tools: pip, pip-tools, venv, virtualenv, pipx, bump2version, build, and twine. The performance gains are staggering (often 10-100x faster), but the real insight is the unified workflow. Here is an overview of what makes uv a true game-changer for any Python developer.

1. The complete project lifecycle. This is what sets uv apart: it's not just an installer, it handles your entire workflow from start to finish. You can now use one tool to:
- Initialize: uv init
- Manage dependencies: uv add, uv remove, uv sync
- Bump versions: uv version --bump patch
- Build: uv build
- Publish: uv publish
This unified lifecycle is a massive boost to developer experience.

2. Beyond the project: tools and scripts. uv also replaces specialized tools like pipx and pip-run:
- Tool management: uv tool install ruff installs ruff into its own isolated, managed environment. No more global site-packages pollution.
- Script running: uv run myscript.py can execute a script, read its dependencies from inline comments, and run it in a temporary, on-the-fly environment (see the example below).

3. Core speed and sanity. At its heart, uv is a blazing-fast resolver and installer, thanks to:
- Rust core: native, parallelized operations.
- Global cache: dependencies are shared across all your projects, saving gigabytes of disk space and making new environment creation almost instant.
- Python management: uv python install 3.12 provides a built-in, simple way to fetch and manage Python versions.

uv is one of the most significant advancements in Python tooling in the last decade. It simplifies our stack, saves us time, and brings a level of cohesion to the ecosystem we've long needed.

Follow @techNmak for more such insights. Check this cheatsheet by Rodrigo Girão Serrão.

81k
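For the script-running workflow, here is a hedged example of a single-file script with inline dependencies. It uses the standardized PEP 723 metadata block that current uv releases read via uv run; the file name and URL are invented for illustration, and the inline-comment shorthand mentioned in the post may differ from this standardized form.

```python
# example_script.py (hypothetical name) - run with: uv run example_script.py
# uv reads the PEP 723 metadata block below, builds a throwaway environment
# with the listed dependencies, and executes the script inside it.
# /// script
# requires-python = ">=3.12"
# dependencies = [
#     "requests",
# ]
# ///
import requests

# Illustrative request; example.com is just a placeholder endpoint.
response = requests.get("https://example.com")
print(response.status_code)
```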

(Same post as the RAG / Agentic RAG breakdown above.)

75k

Software is changing. So is debugging.

Before:
- Print statements everywhere
- Hoping the error shows up
- Restarting the app 15 times
- Fixing one thing, breaking another
- Forgetting what you changed
- Giving up and rewriting the whole function

Then:
- Real debuggers
- git bisect
- Restarting the app 15 times
- Log inspection
- Writing a test… maybe
- Pair debugging with a tired teammate
But you were still doing most of the work.

Now: you make a request, and the Lovable (@lovable_dev) Agent does the rest:
- Reads the right files
- Traces the logic
- Checks the logs
- Finds the root cause
- Plans a fix
- Applies it
- Summarizes the change
It figures things out, just like a great engineer would. Crazy times.

P.S. Lovable is also providing us with a guide to build web apps easily. Like + comment 'Lovable' and I'll share the link.

29k

AI Agents Cheat Sheet

This is a good starting point if you're trying to make sense of AI agents. There's a lot of talk about agent frameworks right now, but at the core, most of them build on the same set of ideas. This cheat sheet gives a simple overview of the key building blocks, from LLMs to orchestration to protocols. It's useful whether you're exploring agent tooling, building internal automations, or just trying to understand the space better.

What is an AI agent? Agents combine reasoning with the ability to take action. They don't just respond; they can plan, call tools, access data, and trigger real-world effects.

Language model: the core reasoning engine. It interprets input and generates plans or responses, but by itself it can't take real-world actions.

Tools: APIs, functions, and external integrations that agents use to do useful work, like querying a database, sending an email, or calling a webhook.

Orchestration layer: coordinates what the agent does, how it reasons (via CoT, ReAct, etc.), how it sequences steps, and how it interacts with tools.

Agentic protocols: protocols like MCP and A2A enable agents to collaborate across platforms (e.g., Slack, GitHub) and maintain context across tasks.

Building AI agents: there's no single way to build an agent. Some start with a single prompt, others use low-code platforms, and some teams build full custom frameworks. The cheat sheet maps out the trade-offs.

If you're trying to understand the agent space, or explain it to your team, this breakdown might help. Save it if you want a reference to come back to.

Follow @techNmak

53k
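Because the building blocks above (reasoning engine, tools, orchestration loop) translate naturally into code, here is a compact, hedged sketch of a ReAct-style agent loop. call_llm() is a stub standing in for a real chat-completion call, and get_time/add are toy tools; no specific framework or protocol is implied.

```python
# Compact ReAct-style loop: the (stubbed) model either requests a tool or
# returns a final answer; the orchestration loop runs tools and feeds the
# observations back. All names here are illustrative only.
import json
from datetime import datetime, timezone

def get_time(_args: str) -> str:
    return datetime.now(timezone.utc).isoformat()

def add(args: str) -> str:
    a, b = (float(x) for x in args.split())
    return str(a + b)

TOOLS = {"get_time": get_time, "add": add}

def call_llm(messages: list[dict]) -> dict:
    # Placeholder: a real agent would send `messages` to a chat model that
    # replies with either {"tool": ..., "args": ...} or {"answer": ...}.
    return {"answer": "stub response"}

def run_agent(task: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):              # orchestration layer
        decision = call_llm(messages)       # reasoning engine
        if "answer" in decision:
            return decision["answer"]       # final answer: stop the loop
        observation = TOOLS[decision["tool"]](decision.get("args", ""))
        messages.append({"role": "assistant", "content": json.dumps(decision)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Stopped after max_steps without a final answer."

print(run_agent("What time is it in UTC?"))
```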

LLM Generation Parameters

These are the primary controls used to influence the output of a Large Language Model.

1. Temperature
- Controls the randomness of the output. It is applied to the probability distribution of the next possible tokens.
- Low temperature (e.g., 0.2) makes the output more deterministic and focused: the model will almost always select the most probable next token. Ideal for factual tasks like summarization, code generation, and direct Q&A.
- High temperature (e.g., 1.0) makes the output more random and creative: the model is more likely to select less probable tokens, leading to more diverse and novel text. Useful for creative writing, brainstorming, and open-ended conversation.

2. Top-p
- Controls randomness by selecting from a dynamic subset of the most probable tokens. The model considers the smallest set of tokens whose cumulative probability is greater than or equal to the p value.
- Example: if top_p is 0.9, the model considers only the tokens that make up the top 90% of the probability mass for the next choice, discarding the remaining 10%.
- It provides a good balance between randomness and preventing the model from choosing bizarre or nonsensical tokens, and is often recommended as an alternative to temperature.

3. Top-k
- Controls randomness by restricting the model's choices to the k most likely next tokens.
- Example: if top_k is 50, the model will only consider the 50 most probable tokens for its next selection, regardless of their combined probability.
- It prevents very low-probability tokens from being selected, which can make the output more coherent and less erratic than high-temperature sampling alone.

4. Max Length / Max New Tokens
- Sets a hard limit on the number of tokens (words or word pieces) the model can generate in a single response.
- Essential for controlling response length, managing computational costs, and preventing runaway or endlessly rambling outputs.

5. Frequency Penalty
- A value (typically between -2.0 and 2.0) that penalizes tokens based on how often they have already appeared in the generated text so far.
- A positive value (e.g., 0.5) decreases the likelihood of the model repeating the exact same words, encouraging more linguistic variety.
- A zero value (0.0) applies no penalty.

6. Presence Penalty
- A value (typically between -2.0 and 2.0) that penalizes tokens simply for having appeared in the text at all, regardless of their frequency.
- A positive value (e.g., 0.5) encourages the model to introduce new concepts and topics, since it is penalized for reusing any token, even once.
- Particularly useful for preventing the model from getting stuck on a single idea or topic.

7. Stop Sequences
- A user-defined string or list of strings that will immediately stop the generation process if the model produces it.
- Crucial for controlling the structure of the output, creating formatted text, or simulating conversational turn-taking.

Follow @techNmak

14k
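As a worked example of how the first three knobs interact, the sketch below applies temperature, top-k, and top-p to a toy five-token distribution. The vocabulary and logits are invented; a real model emits one logit per vocabulary token at every step.

```python
# Toy next-token sampler showing temperature, top-k, and top-p in sequence.
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=50, top_p=0.9):
    # 1. Temperature: scale logits before softmax; lower => more deterministic.
    scaled = [x / max(temperature, 1e-6) for x in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]

    # 2. Top-k: keep only the k most probable token indices.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]

    # 3. Top-p: from that ranking, keep the smallest prefix whose mass >= p.
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break

    # 4. Renormalize over the surviving candidates and sample one index.
    weights = [probs[i] for i in kept]
    return random.choices(kept, weights=weights, k=1)[0]

# Invented 5-token vocabulary and logits, purely for illustration.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [2.0, 1.5, 0.3, 0.2, 0.1]
print(vocab[sample_next_token(logits, temperature=0.7, top_k=3, top_p=0.9)])
```

Re-running it with temperature=0.2 versus 1.2 (or top_p=0.5 versus 0.95) makes the determinism-versus-diversity trade-off described above easy to see.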

People with the Analyst archetype

The Analyst

Insights on web3 with a d-absurd approach. Your favorite KOL's ghostwriter. Advocate @Seraph_global | SMM @Atleta_Network | Prev. @DexCheck_io

1k following · 10k followers
The Analyst

็ ”็ฉถๅ‘ๅฏผ |็ฉบๆŠ•็ŒŽไบบ |็ฉบๆŠ•ๆ•™็จ‹ | ็ฉบๆŠ•ไผ˜่ดจไฟกๆฏ ๏ฝœ็ƒญ่กท็ ”็ฉถๆ–ฐไบ‹็‰ฉ๏ฝœๆŒ–็Ÿฟ๏ฝœๅœŸ็‹—็ˆฑๅฅฝ่€…๏ฝœๆ‰“ๆ–ฐ๏ฝœGamefi๏ฝœDeFi๏ฝœNFT๏ฝœๆ’ธๆฏ›๏ฝœWEB3๏ฝœDM for Colla๏ฝœVX: jya777222

3k following119k followers
The Analyst

Product Designer & AI Explorer | Crafting UI/UX for blockchain | Sharing tech insights | Learn in public | Open to new projects and collaborations

300 following · 1k followers
The Analyst

Crypto enthusiast

1k following · 2k followers
The Analyst

Bitcoin, Materials Science PhD | Analytics, tools, and guides | Bitcoin Data Lounge Host

624 following · 28k followers
The Analyst

Crypto comms pro | Alpha @cookiedotfun | Building @BioProtocol | Hyping @KaitoAI | Growing @wallchain_xyz | DeSci & AI fan #Web3

1k following · 1k followers
The Analyst

Chief scientist at Redwood Research (@redwood_ai), focused on technical AI safety research to reduce risks from rogue AIs

4 following · 6k followers
The Analyst

Crypto enthusiast || Content writer || Mathematician || God over all… pfp - @doginaldogsx

1k following · 2k followers
The Analyst

Advisor at @StudioYashico | Artist behind @cubescrew

1k following · 3k followers
The Analyst

ๆฏๆ—ฅalpha็ ”็ฉถ๏ผŒไธ“ๆณจๅฅ—ๅˆฉใ€็ฉบๆŠ• @Polymarket ่ง‚ๅฏŸใ€็ ”็ฉถ ๆ‰€ๆœ‰ๅ†…ๅฎนๅ‘็š„ๅ†…ๅฎน้ƒฝๆ˜ฏๆ€่€ƒไธŽ่ฎฐๅฝ•,้žๆŠ•่ต„ๅปบ่ฎฎ

2k following4k followers
The Analyst

้ƒฝๆฅweb3ไบ†๏ผŒๆœ‰ไป€ไนˆ่พ›่‹ฆๅฏ่ฏด๏ผŒไป€ไนˆ้ƒฝๆ’ธ๏ผŒไปŽไธๅ่ง๏ผ 2025ๆ้ซ˜ๆ•ˆ็އ๏ผŒไธ€่ตทๆšดๅฏŒ ๏ผŒๅŠ ๅ…ฅๆป‘็ฟ”ๆœบ๏ผŒไธ€่ตทๆŽข็ดข AIFI @glider_fi

1k following1k followers
The Analyst

Research @carnegiemellon | Building CollabSphere.ai and PE AI native platform | prev @Dream11

736 following · 190 followers

Get Started for Free