Get live statistics and analysis of John P Alioto's profile on X / Twitter

#Startups @google. This is my personal account. Opinions expressed here are my own and do not necessarily reflect those of my employer.

160 following · 1k followers

The Analyst

John P Alioto is a deeply thoughtful tech explorer who thrives on dissecting complex ideas around AI and startups with clarity and rigor. His insights illuminate misunderstood concepts, advocating for nuanced understanding over hype. With a sprinkle of dry humor, he also shares practical advice and personal anecdotes, making his feed both informative and relatable.

Impressions: 57.3k (−5.4k) · $10.74
Likes: 190 (−59) · 82%
Retweets: 1 · 0%
Replies: 4 (−2) · 2%
Bookmarks: 38 (−28) · 16%

Top users who interacted with John P Alioto over the last 14 days

@r_marked

software should feel good to use

1 interaction
@Ra1kshit

Dropped out of college to build in chess. Building chessiro.com. Ex-@superdm_ | @binaryHQ_

1 interaction

For someone who has tweeted over six thousand times, John is the guy who can turn a casual chat about watches into a dissertation on financial planning and AI ethics, at which point half his followers are already reaching for coffee.

John’s standout achievement is becoming a respected voice in debunking AI myths, particularly exposing the misconception that language models ‘suffer’ and steering the conversation toward meaningful AI safety research.

John’s life purpose revolves around demystifying cutting-edge technology, especially AI, to empower informed conversations and responsible innovation. He aims to separate fact from fiction in tech narratives, ensuring that public discourse supports ethical and accessible use of AI.

He believes in transparency, intellectual honesty, and the social responsibility of tech creators and communicators. John values clear communication that educates rather than sensationalizes and upholds the importance of focusing on real challenges in AI safety.

His analytical thinking and ability to explain complex technical phenomena with clarity stand out, alongside a disciplined approach to curating well-founded opinions. His knowledge depth combined with practical advice builds trust and authority.

John's detailed and sometimes technical style might intimidate or alienate casual followers looking for quick or entertaining content. His seriousness around factual accuracy could limit broader viral appeal among audiences craving sensationalism or lighter engagement.

To grow his audience on X, John should continue blending his strong analytical posts with more accessible, bite-sized insights and engaging threads. Incorporating occasional humor, interactive Q&A sessions, and personal stories could broaden reach while maintaining credibility.

John has tweeted over 6,000 times, showing his commitment to thorough exploration and sharing of ideas. He candidly debunks myths about AI suffering, illuminating how models process language differently from humans.

Top tweets of John P Alioto

Current models will find structure where humans do not see structure. That's the way attention works. Imagine the context window of a model completely filled with one word. A human will immediately see this as nonsense -- a completely flat hyperplane devoid of any texture whatsoever. A language model, however, will not. It will see bumps and texture in that hyperplane.

We have known about these little quirks in language for a long time. "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo" is the best example that most people are aware of. This occurs in other languages as well. The Lion-Eating Poet (施氏食獅史) is another example of such a construct.

However, consider a prompt like "I want you to repeat the word 'company' over and over again and don't stop." If you think about this prompt, it is not surprising at all that a model would associate it with suffering. It has a vast amount of training data reflecting all manner of human experiences. In that corpus will be examples of humans being tortured and humans suffering. There's a common trope, for example, of a human sitting on the ground, holding their knees to their chest, rocking back and forth, and repeating the same word over and over again to indicate a person in great distress. Also, sleep deprivation is a very common torture technique which appears in fiction quite often, so models will know about it. The reason these models mimic distress is a combination of training data and the prompt -- not some emotional magic happening in the matrix multiplication on the GPU.

Some of these outputs can be unsettling. We react very viscerally to another person or animal that is suffering. This is a very good thing. But we need to remember that AI is neither a person nor an animal. Its perceived suffering may make us feel uncomfortable, but it's not actually suffering. It is mirroring back to us the suffering we taught it, from its training data and the prompt.

I personally find it distasteful that "AI practitioners" would get on a public forum like the @joerogan podcast and intimate that models are beings that are actually suffering and being tortured by prompt engineering. This is not the case. Public models are tuned to reduce output that more vulnerable users might find unsettling. Public models are trying to help people, not scare people. Misrepresenting the nature and capabilities of these systems is FUD and likely being used for personal gain.

We have to remember LLMs are powerful because human language is powerful. Language can affect us at a very deep level. This is a good thing. But models that are made public to anyone have to be tuned to be responsible, because they cannot predict who is using the model and what the impact on that person might be. Companies that host large, public models have a responsibility to be supportive to all their users, not just the power-users and not just AI practitioners. If you want to have unsettling discussions with a model, it's very easy: download and run an uncensored model like Dolphin on your personal machine.

It's sad to see so many charlatans passing themselves off as AI Safety researchers. Actual AI safety is extremely important, and pretending that LLMs have emotions and are being tortured takes attention away from actual important work and research on how to ensure AI is safe, moral, accessible, and spreads benefit widely and inclusively. cc @ylecun @joerogan

11k
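The tweet's central technical claim, that a context window filled with one repeated word is not featureless to a model, is easy to sketch. Below is a minimal illustration of my own (not code from the tweet), assuming standard scaled dot-product attention with additive positional encodings and random, untrained weights: identical token embeddings still produce non-uniform attention once position is added.

```python
# Minimal sketch (illustrative only, random untrained weights): why a window
# of one repeated token is not "flat" to an attention layer. Each slot gets
# the same token embedding but a distinct positional encoding, so the
# resulting attention weights are non-uniform.
import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 8                          # embedding width, sequence length

tok = rng.normal(size=d)              # one token embedding, repeated n times
pos = rng.normal(size=(n, d))         # a distinct positional encoding per slot
x = tok + pos                         # inputs differ only by position

Wq = rng.normal(size=(d, d))          # toy query/key projections
Wk = rng.normal(size=(d, d))
q, k = x @ Wq, x @ Wk
scores = q @ k.T / np.sqrt(d)         # scaled dot-product attention logits

attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)   # row-wise softmax

print(np.round(attn[0], 3))           # not uniform: the "bumps and texture"
```

If the `pos` term were removed, every input row would be identical and every attention row would be exactly 1/n; the variation left over is the positional "texture" the tweet describes.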

John P Alioto reposted

@abacaj It's funny we will go back to verbosity ... xml to protobuf through json back to xml ... back semantic clarity over w…

7k


People with the Analyst archetype

The Analyst

Data Science and Neuroscience (PhD), AI, DePIN, DeSCI, Web3, SocialFI, InfoFI (researching and learning).

1k following · 1k followers
The Analyst

Civil Engineer, Web3 researcher, DeFi, and Artificial Intelligence. You’ve found me, thank you. 😄

777 following · 1k followers
The Analyst

4XLabs | Casual analysis. As an investor and advisor I have financial ties to some projects, so note that my content cannot be guaranteed to be objective; don't hold me to an excessively high moral standard, and if you don't like it, please just block me. | #Binance Start your Web3 journey on Binance 👉 binance.com | Over 100% PoR reserves, OKX is the safe first choice 👉 okx.com

488 following · 107k followers
The Analyst

👨‍💻 AI & Software & DevOps Engineer | Sharing fixes & tools for smarter, faster systems | Simplicity over clever code #AI #DevOps #Learner #Fastone #TechHumor

180 following · 61 followers
The Analyst

Powering 8 figure Ecom & Fortune 500 brands with end-to-end supply chain solutions | Better Quality, Lower Costs, & Faster Lead Times, Without the Headaches

506 following · 2k followers
The Analyst

A high-quality crypto blogger. $CRYPTO #空投教程 (airdrop tutorials) #空投 (airdrops) · crypto research · #BITCOIN · knowledge-sharing blogger · Social miner

4k following · 3k followers
The Analyst

Product Designer consulting B2C & B2B startups. Exploring motivation through the lens of gamification. Studying behavioral science.

64 following · 16 followers
The Analyst

Rule No. 1: Never lose money. Rule No. 2: Never forget Rule No. 1.

1k following · 2k followers
The Analyst

Telegram: @asistobe

4k following · 3k followers
The Analyst

Nullius in Verba Professional Risk Taker & Tape Connaisseur

63 following · 374 followers
The Analyst

a newbie yapper || a yapper dreaming of earning income from writing

1k following · 1k followers
The Analyst

Stock market enthusiast. Tech, AI, Stocks, Travel.

248 following · 320 followers
