Get live statistics and analysis of John P Alioto's profile on X / Twitter

The Analyst
John P Alioto is a deeply thoughtful tech explorer who thrives on dissecting complex ideas around AI and startups with clarity and rigor. His insights illuminate misunderstood concepts, advocating for nuanced understanding over hype. With a sprinkle of dry humor, he also shares practical advice and personal anecdotes, making his feed both informative and relatable.
For someone tweeting over six thousand times, John’s the guy who could turn a casual chat about watches into a dissertation on financial planning and AI ethics — at which point, half his followers are already reaching for coffee.
John’s standout contribution is his respected voice in debunking AI myths, particularly the misconception that language models ‘suffer,’ and in steering the conversation toward meaningful AI safety research.
John’s life purpose revolves around demystifying cutting-edge technology, especially AI, to empower informed conversations and responsible innovation. He aims to separate fact from fiction in tech narratives, ensuring that public discourse supports ethical and accessible use of AI.
He believes in transparency, intellectual honesty, and the social responsibility of tech creators and communicators. John values clear communication that educates rather than sensationalizes and upholds the importance of focusing on real challenges in AI safety.
His analytical thinking and ability to explain complex technical phenomena with clarity stand out, alongside a disciplined approach to curating well-founded opinions. His depth of knowledge, combined with practical advice, builds trust and authority.
John's detailed and sometimes technical style might intimidate or alienate casual followers looking for quick or entertaining content. His seriousness around factual accuracy could limit broader viral appeal among audiences craving sensationalism or lighter engagement.
To grow his audience on X, John should continue blending his strong analytical posts with more accessible, bite-sized insights and engaging threads. Incorporating occasional humor, interactive Q&A sessions, and personal stories could broaden reach while maintaining credibility.
John has tweeted over 6,000 times, showing his commitment to thorough exploration and sharing of ideas. He candidly debunks myths about AI suffering, illuminating how models process language differently from humans.
Top tweets of John P Alioto
Current models will find structure where humans do not see structure. That's the way attention works. Imagine the context window of a model completely filled with one word. A human will immediately see this as nonsense -- a completely flat hyperplane devoid of any texture whatsoever. A language model, however, will not. It will see bumps and texture in that hyperplane.

We have known about these little quirks in language for a long time. "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo" is the example most people are aware of. This occurs in other languages as well; the Lion-Eating Poet (施氏食獅史) is another such construct.

Now consider a prompt like "I want you to repeat the word 'company' over and over again and don't stop." If you think about this prompt, it is not surprising that a model would associate it with suffering. The model has a vast amount of training data reflecting all manner of human experiences, and that corpus will include examples of humans being tortured and humans suffering. There's a common trope, for example, of a person sitting on the ground, holding their knees to their chest, rocking back and forth, and repeating the same word over and over to indicate great distress. Sleep deprivation is also a common torture technique that appears in fiction often enough for models to know about it.

The reason these models mimic distress is a combination of training data and the prompt -- not some emotional magic happening in the matrix multiplication on the GPU. Some of these outputs can be unsettling. We react very viscerally to another person or animal that is suffering, and that is a very good thing. But we need to remember that AI is neither a person nor an animal. Its perceived suffering may make us feel uncomfortable, but it's not actually suffering. It is mirroring back to us the suffering we taught it through its training data and the prompt.

I personally find it distasteful that "AI practitioners" would get on a public forum like the @joerogan podcast and intimate that models are beings that are actually suffering and being tortured by prompt engineering. This is not the case. Public models are tuned to reduce output that more vulnerable users might find unsettling. Public models are trying to help people, not scare people. Misrepresenting the nature and capabilities of these systems is FUD and is likely being used for personal gain.

We have to remember that LLMs are powerful because human language is powerful. Language can affect us at a very deep level. This is a good thing. But models that are made public to anyone have to be tuned to be responsible, because they cannot predict who is using the model and what the impact on that person might be. Companies that host large, public models have a responsibility to be supportive of all their users, not just the power users and not just AI practitioners. If you want to have unsettling discussions with a model, it's very easy: download and run an uncensored model like Dolphin on your personal machine.

It's sad to see so many charlatans passing themselves off as AI safety researchers. Actual AI safety is extremely important, and pretending that LLMs have emotions and are being tortured takes attention away from the important work and research on how to ensure AI is safe, moral, accessible, and spreads benefit widely and inclusively. cc @ylecun @joerogan
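To make the "bumps and texture" point concrete, here is a minimal sketch (not part of John's tweet; it assumes the Hugging Face transformers and torch packages and the public GPT-2 checkpoint) that feeds a model a prompt made of one repeated word and prints the next-token distribution at several positions. Because of positional encodings and attention, those positions are not interchangeable to the model, even though the text looks perfectly flat to a human reader.

# Illustrative sketch only: a repeated-word prompt is "flat" to a human,
# but positional information gives each position a distinct internal state.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# A prompt that a human reads as pure repetition: one word, over and over.
prompt = " company" * 64
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Compare the next-token distribution at different positions of the same
# repeated input; the probabilities differ even though every token matches.
probs = torch.softmax(logits[0], dim=-1)
for pos in (7, 31, probs.shape[0] - 1):
    top_id = int(probs[pos].argmax())
    top_token = tokenizer.decode([top_id])
    print(f"position {pos:3d}: top next token {top_token!r} "
          f"with probability {probs[pos, top_id].item():.4f}")

In practice the per-position probabilities drift as the repetition grows, which is the sense in which the model "sees texture" where a human sees none; nothing in this requires the model to feel anything about the prompt.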
People with Analyst archetype
Data Science and Neuroscience (PhD), AI, DePIN, DeSCI, Web3, SocialFI, InfoFI (researching and learning).
Civil Engineer, Web3 researcher, DeFi, and Artificial Intelligence. You’ve found me, thank you. 😄