Get live statistics and analysis of Sterling Crispin 🕊️'s profile on X / Twitter

Artist + Software Developer / Married to @Helen_Crispin_ / Previously AR-VR and Neurotech

5k following · 43k followers

The Innovator

Sterling Crispin is a boundary-pushing artist and software developer with a rich history in AR, VR, and neurotechnology. Blending creativity with deep technical expertise, Sterling shapes future tech experiences that tap directly into human cognition. Passionate about immersive tech, he’s a voice of both visionary innovation and practical application.

Impressions: 17.7M-17.3M ($3333.58)
Likes: 53.7k-49.5k (83%)
Retweets: 4.5k-4.4k (7%)
Replies: 1.3k-1k (2%)
Bookmarks: 5.3k-4.7k (8%)

Top users who interacted with Sterling Crispin 🕊️ over the last 14 days

@mikegee

Designer, collector of digital objects, lowercase "a" artist. Product Design at @Shape_L2/@0xDecaArt, prev @rodeodotclub/@foundation, @delighted (co-founder).

2 interactions
@obj0x0

|| meme boy || interested in speculative markets || learn about things ||

2 interactions
@KCharlesRoss

Husband, father, FL native, Gator 4 life, EV owner, car enthusiast, photographer, ❤️ all things sci-fi & Star Trek 🖖🏼, ❤️making AI music, politically homeless

2 interactions
@samhayek_

Artist + Designer + Musician • Brand & Product | Ex: @audius

2 interactions
@cryptorookie

Founder @devilxdetail | onchain artist

1 interaction
@BoonaETH

Director of Media @shillrxyz | Storytelling @roversxyz

1 interaction
@Zum_Gee

*Grad(PsyD) Psycholology *MSc in Phrenological Semiotics, specialising supraorbital ridge hermeneutics. Research fellow UCLAM (London) #CraniognosticScholar

1 interaction
@iFurthur

crypto/art/memes enjoyer

1 interaction
@hymwalsndy1421

Investor • Collector | ApeChain x $APE x @ApeCoin 🦍 | Empowering Artists, Builders & Communities | All views my own Ambassador: @Fair3_community

1 interaction
@sunnydayzsoon

🇨🇦1/1 Vector Aboriginal Artist. Featured artist @beeple Pepefest, Election Night & Select Start

1 interaction
@hazme11

Bachelor's degree in Media and Communication - { And the Messenger said: O my Lord, indeed my people have taken this Qur'an as a thing abandoned }

1 interaction
@punk1376

Visita Interiora Terrae Rectificando Invenies Occultum Lapidem 🜁 🜃 🜁 MDCXIX 🜁 🜃 🜁

1 interaction
@Mjreard

Effective Altruism stan. Frmly @80000hours. I want to do a podcast with @chanamessinger and she's kind of letting me.

1 interaction
@Neovo903

So, what do I write here?

1 interaction

Sterling's tweet count is so astronomical that if Twitter charged rent per tweet, he'd own half the platform by now. Maybe it's time to stop tweeting and let his brain-computer interfaces do the talking!

A landmark achievement is Sterling's foundational and patent-backed work on the Vision Pro, which integrates neurotechnology to anticipate user actions, marking a milestone in immersive computing and human-computer interaction.

To pioneer immersive and neuroadaptive technologies that enhance human experience by merging art, science, and AI, ultimately transforming how people interact with digital realities and their own minds.

Sterling believes in technology as an extension of human creativity and cognition and values innovation that respects natural human rhythms and wellbeing, such as advocating for natural light color balances and ethically enhancing user mindfulness. He sees technology not just as tools but as cognitive experiences that can improve attention, learning, and emotional states.

His unique combination of high-level technical skill and artistic sensibility allows him to create cutting-edge immersive technologies that are both visionary and user-centered. He knows how to translate complex neurotech concepts into practical, transformative products.

With over 38,000 tweets and following nearly 6,000 accounts, Sterling might struggle with spreading focus too thin, potentially overwhelming followers or diluting his core message amid a vast amount of detailed content.

To grow his audience on X, Sterling should curate his immense expertise into regular, digestible threads that mix tech insights with stunning visual art, while engaging more directly with his followers through Q&As or live spaces. Using storytelling to humanize complex neurotech ideas will invite broader interest beyond just tech enthusiasts.

Fun fact: Sterling helped develop Apple’s Vision Pro, contributing to neurotechnology patents that predict user intent through eye movements, essentially building a 'mind-reading' interface without invasive surgery.

Top tweets of Sterling Crispin 🕊️

I spent 10% of my life contributing to the development of the #VisionPro while I worked at Apple as a Neurotechnology Prototyping Researcher in the Technology Development Group. It’s the longest I’ve ever worked on a single effort. I’m proud and relieved that it’s finally announced. I’ve been working on AR and VR for ten years, and in many ways, this is a culmination of the whole industry into a single product. I’m thankful I helped make it real, and I’m open to consulting and taking calls if you’re looking to enter the space or refine your strategy.

The work I did supported the foundational development of Vision Pro, the mindfulness experiences, ▇▇▇▇▇▇ products, and also more ambitious moonshot research with neurotechnology. Like, predicting you’ll click on something before you do, basically mind reading. I was there for 3.5 years and left at the end of 2021, so I’m excited to experience how the last two years brought everything together. I’m really curious what made the cut and what will be released later on.

Specifically, I’m proud of contributing to the initial vision, strategy and direction of the ▇▇▇▇▇▇ program for Vision Pro. The work I did on a small team helped green light that product category, and I think it could have significant global impact one day.

The large majority of work I did at Apple is under NDA, and was spread across a wide range of topics and approaches. But a few things have become public through patents which I can cite and paraphrase below.

Generally as a whole, a lot of the work I did involved detecting the mental state of users based on data from their body and brain when they were in immersive experiences. So, a user is in a mixed reality or virtual reality experience, and AI models are trying to predict if you are feeling curious, mind wandering, scared, paying attention, remembering a past experience, or some other cognitive state. And these may be inferred through measurements like eye tracking, electrical activity in the brain, heart beats and rhythms, muscle activity, blood density in the brain, blood pressure, skin conductance etc.

There were a lot of tricks involved to make specific predictions possible, which the handful of patents I’m named on go into detail about. One of the coolest results involved predicting a user was going to click on something before they actually did. That was a ton of work and something I’m proud of. Your pupil reacts before you click in part because you expect something will happen after you click. So you can create biofeedback with a user's brain by monitoring their eye behavior, and redesigning the UI in real time to create more of this anticipatory pupil response. It’s a crude brain computer interface via the eyes, but very cool. And I’d take that over invasive brain surgery any day.

Other tricks to infer cognitive state involved quickly flashing visuals or sounds to a user in ways they may not perceive, and then measuring their reaction to it. Another patent goes into details about using machine learning and signals from the body and brain to predict how focused, or relaxed you are, or how well you are learning. And then updating virtual environments to enhance those states. So, imagine an adaptive immersive environment that helps you learn, or work, or relax by changing what you’re seeing and hearing in the background. All of these details are publicly available in patents, and were carefully written to not leak anything.

There was a ton of other stuff I was involved with, and hopefully more of it will see the light of day eventually. A lot of people have waited a long time for this product. But it’s still one step forward on the road to VR. And it’s going to take until the end of this decade for the industry to fully catch up to the grand vision for this tech. Again, I’m open to consulting work and taking calls if your business is looking to enter the space or refine your strategy.

Mostly, I’m proud and relieved this has finally been announced. It’s been over five years since I started working on this, and I spent a significant portion of my life on it, as did an army of other designers and engineers. I hope the whole is greater than the sum of the parts and Vision Pro blows your mind.

9M
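To make the anticipatory-pupil idea from the thread above concrete, here is a toy sketch in Swift. It is purely illustrative and assumes nothing about Apple's actual (NDA'd, patent-described) pipeline: it just compares a short window of recent pupil-diameter samples against a resting baseline and flags a relative dilation, the crude core of "predict the click before it happens." Every name and threshold here is hypothetical.

```swift
import Foundation

/// One pupil-diameter sample in millimeters, taken at a fixed rate.
/// (Hypothetical type for illustration only.)
struct PupilSample {
    let diameterMM: Double
}

/// Returns true if the mean diameter of the most recent `window` samples
/// rose above the baseline mean by more than `threshold` (relative change),
/// a crude stand-in for an "anticipatory" dilation signal.
func isAnticipatoryDilation(
    samples: [PupilSample],
    window: Int = 10,
    threshold: Double = 0.05
) -> Bool {
    guard samples.count > window else { return false }
    let baseline = samples.dropLast(window).map(\.diameterMM)
    let recent = samples.suffix(window).map(\.diameterMM)
    let baselineMean = baseline.reduce(0, +) / Double(baseline.count)
    let recentMean = recent.reduce(0, +) / Double(recent.count)
    return (recentMean - baselineMean) / baselineMean > threshold
}

// Toy usage: a flat 3.0 mm baseline followed by a ramp toward ~3.4 mm.
let stream = (0..<40).map { i in
    PupilSample(diameterMM: i < 30 ? 3.0 : 3.0 + 0.04 * Double(i - 29))
}
print(isAnticipatoryDilation(samples: stream))  // true
```

A real system would of course fuse many signals (the thread lists eye tracking, EEG, heart rhythm, skin conductance) through trained models rather than a fixed threshold; this sketch only shows the shape of the pupil cue.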

when I was 16 I cornered the market on fishfeed for about a week and was the only supplier on the english speaking internet in the early 2000s

a family friend had a huge home aquarium, wall to wall floor to ceiling with sharks and eels and shit. I think he was a coke dealer. I helped him maintain it through junior high school and was spending a lot of time on a forum called reefcentral learning about lighting systems, taking care of the fish he had, and helping him source 50 pound bags of bulk fishmeal.

i realized how much money was involved in all this and started flipping chinese spirulina powder and shrimp meal on ebay and reefcentral that I was buying for nothing on alibaba, and printing out labels to cover theirs that said FishFeed International. I was probably clearing a few hundred dollars a week, nothing crazy but I was 16

this guy on the buy/sell forums said he needed two metric tons of protein and I figured he was trolling but I told him I could get as much as he wanted, so I tried to set up a direct shipment between the source I was buying from on alibaba and him. But they wouldn't give me that much credit without a registered importer, so I used my FishFeed International name and my friend's marine contracting license and we took 15% off the top just for brokering it

within a couple of weeks the buy/sell forums were all posts like that trying to get tons of fishmeal. Everyone's suppliers were either raising prices or totally out. I tried to repeat the arbitrage but the seller on alibaba was asking for prepayment up front and the other sellers I tried wanted insane prices

I thought it was over, but at the time the most replied to post on reefcentral was a petition trying to stop krill harvesting near scotland. it seemed like a lost cause to me and I went looking to see if there was a way to buy the krill. Back then the internet sucked and it wasn't uncommon to find open FTP servers. I found the backend of the government bidding system for the krill harvest and just edited FishFeed International into the bidding spreadsheet with an order for 12 metric tons. I didn't really realize how insane that volume was but I had just sold 2 tons and there were posts all over the forums asking for more. Later I learned the guy who bought the 2 tons was supplying a bunch of zoos

The krill harvest came in just as la nina peaked and a huge storm hit the whole west side of south america. Anchovy prices exploded and the fishmeal stock was gone. People were begging for supply on reefcentral and nobody had any

there was no exchange or pricing structure, i just said i had a bioequivalent substitute and started taking orders quoting people 10x a normal price, nobody blinked. I got $20k worth of orders in the first day and by the end of the week I had over $300k in my paypal

i started forwarding the delivery addresses and weights from the orders to the krill collectors and paid about 1/5th what I was charging on reefcentral for it

it maybe lasted three weeks until I ran out of krill to arbitrage and by that point big companies were undercutting me anyway but for a brief moment in time I ruled with an iron fist

302k

*Slavoj Zizek sniffs loudly touching his nose* Ah yes yes the corporate pastoral. This is a perfect ideological object. You see, you see here Sam, he pretends to give us a modest, one could say intimate conversation between two friends. No. This is a mythopoetic act of ideological laundering. *Sniff* This is not, you see, this is not a product announcement. No. Sam is giving us a myth, you see. *Sniff* This film, if one can call such a thing a film, is a strategic myth making artefact to construct an origin story. This is a biopic about themselves played by themselves, you see. Masturbation. *Sniff* The hubris is astounding. *Sniff*

This opens, what do we see, what do we see first? *Sniff* We see the power shot, the money shot yes? Looking up from the ground up into the towering city. We are small. Silicon Valley is big. Money you see, technology, forbidden sex, and so on. San Francisco to be precise. Yes? *Sniff* Then what? Soft lighting, yes, soft focus, flowers in the city and so on. Two friends. No. *Sniff* These are not two friends. These are the ideologues of techno capital, you see. *Sniff* The priests preparing you for the sacrament of their new device. *Sniff* They smile, you see, "We know we are building godlike machines, but we are such nice people!" *Sniff* Yes very good Sam. And thank you. They shake our hands and smile, while the machine, you see, the machine takes our jobs and our soul, or what have you. *Sniff* Yes?

But the cafe, what a nice cafe I must say. The cafe invites us in. Not just two friends together, you see. We are, as the viewer, we are also their third friend. *Sniff* Perhaps lover, or some such. We will see how the night goes, you know? *Sniff*

But Sam, yes, he does not care about money. Power? No. *Sniff* This boy king, the caring sovereign, he worries about us. The little people you see. *Sniff* The master holds the weight, the original sin, so we may enjoy without guilt. *Sniff* Sam is our Jesus, so we can ask the computer our little questions and not worry. Not to worry about the labor, or the exploitation, or the environmental costs and so on. *Sniff* So you see, this is, you see this is to build a moral legitimacy around leadership figures at a time when, you know *Sniff* AI is taking our jobs, and "How will I feed my family" and "Oh no we are all going to die" and so on. *Sniff*

Notice. Notice they do not talk about technology you see. Values. *Sniff* This is the hand of ideology that distracts you while the other, the other hand you see, it takes from you. But thank you for the coffee Jony, and yes the iPhone. *Sniff* Jony is the great thinker yes? And Jony gives us emotional connection, yes? And the family man stands hand in hand with Sam, as deliberate contrast to the tech overlord he pretends not to be, you see. *Sniff* The European family man with the children and, you know, the forbidden sexual desires of San Francisco and what have you. *Sniff*

We have our origin story, the myth you see, chance encounters in the cafe. *Sniff* "It is funny running into you here Jony!" Yes it must be nice what are the odds of such an encounter. *Sniff* And what? Personal anecdotes they give us, yes, some shared vision to reinforce authenticity in a film which, let us be honest for a moment, this could very well be in the post credits scene of a marvel movie or, you know *Sniff* some such profane act of capitalist entertainment, a mickey mouse adventure or what have you. *Sniff* And for what?

For what is all of this labor and AI and devices and so on *Sniff* So we can find ourselves in a moment, lost, "Oh no" we say, "Oh no I wonder. I have a question to which there is no answer, I cannot think for myself?" *Sniff* No. Let us go to the phone yet again. We must ask the device. But no. You see. *Sniff* My phone is in my pocket. We need a new device you see. Yes? *Sniff* A device which, a device which always listens, you see. *Sniff* An all knowing god who can answer my questions, and tell me what to think, and what should I say, *Sniff* what should I do, and I am afraid and oh no who am I and I am sad and who should I fuck and so on. Yes? *Sniff*

So you see *Sniff* This film is not a documentary yes? *Sniff* What we see here is christ carrying the cross alone through the streets. *Sniff* But not a cross you see. Jony. Jony gives us the iPhone and Sam puts AI inside the device. *Sniff* And now? Now we don't need to think *Sniff* We can just have the AI and the AI, you see, *Sniff* It's wonderful.

642k

One aspect of #VisionPro that I think people haven't fully been able to appreciate is the advanced Spatial Audio system. It's insane, and the whole device is really the most advanced consumer electronics device ever created.

The audio system creates the feeling that sounds are coming from the environment around you. The system measures the geometry of your ears and head with an iPhone depth camera to create a personal profile. It also analyzes your room's acoustic properties, including the physical materials. And when a sound is played it uses audio raytracing.

What's all that mean? Imagine you're in nature and you hear a bird chirp to the left of you. The sound waves the bird makes travel in all directions, bouncing off of trees, the grass, your body, and finally into your ears. Some of those objects dampen the sound, some reflect the sound, others bend the sound. And the sound takes a slightly different amount of time to reach each of your ears. Then the sound echoes off of the unique shape of your ears, which changes its pitch, and all of this lets your brain know what direction the bird chirp came from, and also what else is in your environment.

So again, the advanced spatial audio system of Vision Pro knows the geometry of your ears if you've calibrated that with an iPhone, it analyzes the acoustic properties of your room including the physical materials, and simulates the propagation of sound using audio ray tracing. I don't think there's any other system on earth capable of this.

When you're in a FaceTime call, it sounds as if people are speaking right from where they are. That sounds obvious and like a simple feature, but this is impossible to fully appreciate without experiencing it. George Lucas is famous for saying sound is more than half the experience when watching a movie. And there's really no way of fully communicating the spatial audio system without hearing it for yourself. Imagine putting on a high end pair of headphones with Vision Pro and watching a 3D movie with a screen that feels 100 feet wide.

1M
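One of the cues the post describes, sound arriving at each ear at slightly different times, has a classic closed-form approximation. The sketch below uses the textbook Woodworth spherical-head model for interaural time difference (ITD); it is not Apple's audio pipeline, and the head-radius constant is an assumed average, but it shows the scale of the timing cue your brain uses to localize that bird chirp.

```swift
import Foundation

// Woodworth spherical-head approximation of interaural time difference:
// ITD = (r / c) * (θ + sin θ), where r is head radius, c is the speed
// of sound, and θ is the source azimuth from straight ahead.

let headRadiusMeters = 0.0875   // average adult head radius (assumption)
let speedOfSound = 343.0        // m/s in air at roughly 20 °C

/// ITD in seconds for a source at `azimuth` radians from straight ahead
/// (0 = directly in front, .pi / 2 = directly to one side).
func interauralTimeDifference(azimuth: Double) -> Double {
    (headRadiusMeters / speedOfSound) * (azimuth + sin(azimuth))
}

// A bird chirping 45° to your left reaches the left ear first:
let itd = interauralTimeDifference(azimuth: .pi / 4)
print(String(format: "%.0f microseconds", itd * 1_000_000))  // ≈ 381 µs
```

That a few hundred microseconds of delay is audible as direction is why the per-user ear and head scan matters: the renderer has to reproduce these tiny timing and spectral differences for your geometry, not a generic head's.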

Vision Pro mega-thread 1/5: My advice for designing and developing products for Vision Pro. This thread includes a basic overview of the platform, tools, porting apps, general product design, prototyping, perceptual design, business advice and more. Disclaimer: I’m not an Apple representative. This is my personal opinion and does not contain non-public information.

Overview: Apps on visionOS are organized into “scenes”, which are Windows, Volumes, and Spaces. Windows are a spatial version of what you’d see on a normal computer. They’re bounded rectangles of content that users surround themselves with. These may be windows from different apps or multiple windows from one app. Volumes are things like 3D objects, or small interactive scenes. Like a 3D map, or small game that’s not immersive. Spaces are fully immersive experiences where only one app is visible. That could be full of many Windows and Volumes from your app. Or like VR games where the system goes away and it's all custom content. You can think of visionOS itself like a Shared Space where apps coexist together and you have less control. Whereas Full Spaces give you the most control and immersiveness, but don’t coexist with other apps. Spaces have immersion styles: mixed, progressive, and full. Which defines how much or little of the real world you want the user to see.

User Input: Users can look at the UI and pinch like the demo videos show. But you can also reach out and tap on windows directly, sort of like it’s actually a floating iPad. Or use a bluetooth trackpad or video game controller. You can also look and speak in search bars, but that’s disabled by default for some reason on existing iPad and iOS apps running on Vision Pro. There’s also a Dwell Control for eyes-only input, but that’s really an accessibility feature. For a simple dev approach, your app can just use events like a TapGesture. In this case, you won't need to worry about where these events originate from.

Spatial Audio: Vision Pro has an advanced spatial audio system that makes sounds seem like they’re really in the room by considering the size and materials in your room. Using subtle sounds for UI interaction and taking advantage of sound design for immersive experiences is going to be really important. Make sure to take this topic seriously.

Development: If you want to build something that works between Vision Pro, iPad, and iOS, you'll be operating within the Apple dev ecosystem, using tools like Xcode and SwiftUI. However, if your goal is to create a fully immersive VR experience for Vision Pro that also works on other headsets like Meta's Quest or PlayStation VR, you have to use Unity.

Apple Tools: For Apple’s ecosystem, you’ll use SwiftUI to create the UI the user sees and the overall content of your app. RealityKit is the 3D rendering engine that handles materials, 3D objects, and light simulations. You’ll use ARKit for advanced scene understanding. Like if you want someone to throw virtual darts and have them collide with their real wall, or do advanced things with hand tracking. But those rich AR features are only available in Full Spaces. There’s also Reality Composer Pro, which is a 3D content editor that lets you drag things around a 3D scene and make media-rich Spaces or Volumes. It’s like Diet-Unity that’s built specifically for this development stack. One cool thing with Reality Composer is that it’s already full of assets, materials, and animations. That helps developers who aren’t artists build something quickly and should help to create a more unified look and feel to everything built with the tool. Pros and cons to that product decision, but overall it should be helpful.

Existing iOS Apps: If you're bringing an iPad or iOS app over, it will probably work unmodified as a Window in the Shared Space. If your app supports both iPad and iPhone, it’ll look like the iPad version. You can use the Ornament API to make little floating islands of UI in front of, or beside, your app, to make it feel more spatial. But that’s not something all existing apps get automatically. Ironically, if your app is using a lot of ARKit features, you’ll likely need to ‘reimagine’ it significantly as ARKit has been upgraded a lot. If you’re excited about building something new for Vision Pro, my personal opinion is that you should prioritize how your app will provide value across iPad and iOS too. Otherwise you're losing out on hundreds of millions of users.

Unity: You can build to Vision Pro with the Unity game engine, which is a massive topic. Again, you need to use Unity if you’re building to Vision Pro as well as a Meta headset like the Quest or PSVR. Unity supports building Bounded Volumes for the Shared Space which exist alongside native Vision Pro content. And Unbounded Volumes, for immersive content that may leverage advanced AR features. Finally you can also build more VR-like apps which give you more control over rendering but seem to lack support for ARKit scene understanding like plane detection. The Volume approach gives RealityKit more control over rendering, so you have to use Unity’s PolySpatial tool to convert materials, shaders, and other features. Unity support for Vision Pro allows for tons of interactions you’d expect to see in VR, like teleporting to a new location or picking up and throwing virtual objects.

1M
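To make the thread's "simple dev approach" concrete, here is a minimal hypothetical visionOS app: one Window scene whose view counts taps via a plain TapGesture, which the system drives identically whether the user looks and pinches, touches the window directly, or clicks a trackpad. SwiftUI's WindowGroup and onTapGesture are real APIs; the app itself is an illustrative sketch, not code from the thread.

```swift
import SwiftUI

// A single Window scene in the Shared Space. Volumes and immersive
// Spaces are declared similarly (e.g. .windowStyle(.volumetric) or
// an ImmersiveSpace scene), per the thread's Overview section.
@main
struct HelloVisionApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @State private var tapCount = 0

    var body: some View {
        Text("Tapped \(tapCount) times")
            .font(.largeTitle)
            .padding(40)
            // The system delivers look-and-pinch, direct touch, and
            // pointer input through the same gesture, so the app never
            // has to ask where the event originated.
            .onTapGesture { tapCount += 1 }
    }
}
```

This is exactly the portability argument the thread makes: because the code never inspects the input source, the same view compiles and behaves sensibly on iPad and iPhone too.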

Most engaged tweets of Sterling Crispin 🕊️

The 'problem with AI art' stems from deep rooted cultural, psychological, and educational differences. Some key issues:

- Most people mistakenly conflate craft with art.
- Most don't know the last hundred years of art history, or intentionally reject it outright.
- Most people fear change, and adapting your model of the world to a new world that's rapidly changing through technology is repulsive to many. Especially if it challenges too many closely held beliefs at once.
- And generally, we all live with self imposed limitations, rules for thinking, how we're allowed to behave, boxed in by our own habits. Being confronted by someone living outside of that can shock people into anger. People would rather condemn different ways of thinking than question their own boundaries of thought.
- Related to that last point, most people have a narrow set of rules for what is and is not art.

Asking "Is this art?" is a dead end question. You're left with no deeper understanding of whatever it is you're looking at. It's better to ask: who made this, why, and how? What cultural forces might explain this? Can I understand this from a different perspective? But questioning an artwork and trying to understand it more deeply is something most people have never done with serious effort. It's easier to label a challenging artwork as 'not art' and run away from it. An open curious mind is the best way to experience art.

On the topic of craft and art, there's a very popular school of thought that the more physical labor goes into creating an artwork, the better it is. Especially if the artwork is hyper realistic, and large. I think this is one of the most commonly and closely held beliefs about art, and people hate being challenged on it. If you showed most people two similar photorealistic paintings side by side, and then revealed that one had been painted using the aid of a projector, they'd quickly respond that using a projector is 'cheating' and might even say it's no longer an artwork.

It's worth pausing and questioning that for a moment. Why is it 'cheating'? Who created these rules you have to follow? Why are they 'the rules' and not some other rules? Or no rules at all? What rules do you personally have for art? Did you come up with them, or find them, or did someone tell you them?

Back to projectors and 'cheating'. Realistic paintings from the European Renaissance period are often cited as examples of Art with a capital 'A'. But tracing projections was a technique used during the middle ages and renaissance periods via early optical techniques that eventually led to the camera. People have been tracing projections for over 600 years; it's a core technique that helped the advent of perspective and realism in paintings. It's no more 'cheating' today than it was then. It was also common for these renaissance era artists to lay down the sketch for a painting, and then have an assistant do the actual painting. You'd be labeled a heretic and art-criminal these days if it was revealed you worked the same way. Sometimes, the way an artwork is made is a core part of its meaning, other times it's not. Again, it's better to be curious and ask if it really matters for that piece, rather than to judge an artwork on a set of rules that may not be relevant to it.

Eventually those early optical techniques and tech improved and we got the camera. Some people still don't think photography is art and it's been over 200 years. Photography directly challenges this idea that suffering for countless hours dedicated to the craft of realism is a fundamental part of what art is. If you can 'just' point and click for a realistic image then the definition of what art is and isn't has to change. How you understand quality changes. You have to expand your own self imposed limitations and rules. But if you know any photographers, you know they work just as hard as any other artist. But, generally, it's not the amount of labor that's important. Sometimes you can create an artwork very quickly, serendipitously, as if it was always there and you just had to reveal it. And that's just as valid.

The art of photography, in part, is about curation, editing, and creative decisions. You start with the entire world and edit away everything you don't want to depict until you arrive at a singular image. You could let a million photographers loose in NYC and they'd all find something different. None of them created the city, the people living in it, and all the moments happening. But they'd each found a time, place, and perspective to capture and present as an artwork.

Similarly, creating art with generative AI is a process of curation, editing, and creative decisions. Most artists working with the medium aren't training their own models. Like the photographers, they didn't create NYC. It's a similar creative process of navigating a huge space of potential images and narrowing in on something very specific. And if you know any artists working with AI, you know that it can take countless hours, generating thousands of images, to come across one or two you really truly like. And then there's the post process of editing and altering those images. But again, it's not the amount of labor that's important. The art is.

That all being said, I can sympathise with artists, especially illustrators, whose name has become a prompt into a machine able to replicate their style quickly. But generative AI didn't create the phenomenon of being inspired by another artist's work, or 'borrowing heavily', or plagiarism, or any other point on that spectrum. It's just changed what's possible and the speed of what's possible through new tools. And like the camera, it's going to force us to adapt ourselves to these new tools and expand our self imposed limitations and rules.

Artists have largely already worked through questioning all of this over the last 100 years or so. But like I said most people either don't know this history or reject it outright. Some people reject the more free form and flexible notion of what art can be and say, "if anything can be art then nothing is art and the word loses its meaning." Again I would say, it's more important to question and understand something, than to see which mental box of yours it fits in and then move on. It's fine to have preferences, but walking around labeling things as 'art' or 'not art' is a fruitless endeavor. The world gets more interesting, not less, as you increase the space of possible thoughts you allow yourself to think.

Art is liberation from constraints. Art is radical freedom. Art is an infinite game with flexible rules; the goal is to continue play, not to win. An open curious mind is the best way to experience art.

1M

Low liquidity coins on automated market makers with exponential price curves are shitcoins. You can call them Creator Coins, Culture Tokens, Internet Capital Markets, Music Tokens, AI Tokens, or Memecoins, but it won't change the toxic fundamentals. It's a zero sum PvP game of musical chairs. Nobody leaves with more than they came with unless someone else is losing. And retail gets rinsed by snipers, bundlers, coordinated pump and dumps, FNF groups, and industrial scale autonomous trading bots. Repeatedly pretending you don't know that is disingenuous, delusional, and at best naive for anyone who's been in crypto for more than a year.

You can't just rename "minting dogshit with no liquidity and putting it in a Uniswap pool" to Creator Coins and expect the outcome to be any different without at least trying to alter the tokenomics, or adding some kind of flywheel, to prevent the obvious outcome. 99.99999% of tokens like this go to zero with haste; anything that survives for more than a year is an extremely rare exception to the rule.

I'm quoting Coop directly here, but really I'm talking to everyone making new token launchpads and shipping the same product over and over and over again promising different results. Also talking to anyone buying these coins and professing that, "Bro. I swear bro this time it's different. They're culture coins, creators bro. I swear bro, this time it's different. No no, there's an AI. You don't get it. The token is for a song and you're early to the song. No bro trust me. This is the future. It's not like literally every other coin with the exact same tokenomics. This time it's different bro. Buy my bags bro please. Please buy my bags. I am crying, pissing, and shitting in my pants bro trust me."

And to be clear, sometimes I like trading shitcoins. It's occasionally fun to light money on fire trying to hit a 20x. But I know what game I'm playing. And I'm not flooding the timeline trying to convince anyone it's not a toxic hypercasino hellscape.
159k
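The zero-sum mechanics the post describes are easy to see in a toy constant-product AMM, the x·y=k design behind a Uniswap pool. The sketch below is illustrative only, with invented numbers, but it shows the point: with thin liquidity, the early buyer's exit profit is exactly the loss now held by whoever bought after them.

```swift
import Foundation

// Toy constant-product AMM: reserves satisfy base * token = k,
// and every trade preserves that invariant.
struct Pool {
    var base: Double    // e.g. ETH reserve
    var token: Double   // newly minted coin reserve

    /// Spend `amountIn` of base currency, receive tokens out.
    mutating func buy(amountIn: Double) -> Double {
        let k = base * token
        base += amountIn
        let out = token - k / base   // keeps base * token = k
        token -= out
        return out
    }

    /// Sell `tokensIn` tokens back, receive base currency out.
    mutating func sell(tokensIn: Double) -> Double {
        let k = base * token
        token += tokensIn
        let out = base - k / token
        base -= out
        return out
    }
}

var pool = Pool(base: 10, token: 1_000_000)  // thin liquidity

let earlyBag = pool.buy(amountIn: 1)    // early buyer spends 1
_ = pool.buy(amountIn: 5)               // retail buys in after; price pumps
let earlyExit = pool.sell(tokensIn: earlyBag)

print("early buyer: spent 1, got back \(earlyExit)")
// ≈ 2.03: the ~1.03 profit exists only because the later buyer
// is now holding tokens worth less than what they paid.
```

Run the numbers without the second buy and the early buyer exits with slightly less than 1 (the curve's slippage): nobody profits unless someone buys in later at a worse price, which is the musical-chairs dynamic the post is naming.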

People with Innovator archetype

The Innovator

Web3 explorer, building & sharing insights, if you don't believe in Sui you won't be rich | 👾🛡️

4k following · 5k followers
The Innovator

Solutionist. Astrophysics PhD and scientific software engineer. Dad of 2. Let’s colonise the solar system together. Please visit the link below to read my blog:

2k following · 65k followers
The Innovator

Engineering Manager @Meta Reality Labs | prev @FuboTV | AI, Spatial Computing & Robotics news weekly - visionquest.beehiiv.com

610 following · 604 followers
The Innovator

@0G_labs Changed me / web3 enthusiast / 0Gurus @0G_labs / Never Give up

4k following · 2k followers
The Innovator

Web3 to explore before i sleep :)

2k following · 2k followers
The Innovator

Product person | Studying AI is both my job and my hobby | Exploring passive income | Focused on AI practice, here to chat about AI

370 following · 768 followers
The Innovator

xyus 21 | full stack dev | primary go and ts | working as an ai engineer at a startup | trying to get better at mathematics

77 following · 28 followers
The Innovator

I write here sometimes | @contrary, previously @warpdotdev

1k following · 528 followers
The Innovator

chief get-shit-done officer @huggingface | F1 fan | Here for @at_sofdog’s wisdom | *opinions my own

375 following · 33k followers
The Innovator

Chief Technology Officer @IcehouseVenture Advocate of design thinking & equity crowdfunding. Coffee blogger, ski instructor & business author: amzn.to/1P7AHEl

6k following · 6k followers
The Innovator

manipulating waveforms | music • audio • dev • ai | building open source apps for music producers @soniqaudio_

615 following · 732 followers
The Innovator

Building the future of golf instruction and how we interface with our golf clubs. Founder of @stayhandsy

722 following · 352 followers

