Virtual assistants and the future of AI interaction
H&M announced last month that they’ll be using digital clones of 30 fashion models in their next ad campaign. These aren’t basic avatars; they’re proper AI-powered versions of real people.
I'm not exactly sure what H&M plans to do with these clones, but I discussed something similar with a startup founder over a year ago. We imagined digital clones as personal LLMs we carry around: a digital version of ourselves, stored in the cloud. The idea was a small device running an LLM, linked to your public ID and broadcasting it over something like BLE or an RFID tag.
This tiny computer would talk to an AI-powered receiver that recognises you using digital signatures: the device signs what it broadcasts with your private key, and the receiver checks the signature against your public ID. That's where a blockchain could help protect your identity, acting as a tamper-resistant record of those public IDs. Receivers in shops, restaurants or venues around you would pick it up and personalise things instantly, whether you're shopping, meeting friends or grabbing a coffee.
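If you want to picture the identity part, here's a minimal sketch of how the device could sign its broadcast and how a receiver could verify it. I'm assuming Python's `cryptography` package and Ed25519 keys here, and the payload fields are made up for illustration:

```python
# pip install cryptography
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# On the device: generate a keypair once; the public key becomes your "public ID".
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A hypothetical payload the device might broadcast over BLE (fields are illustrative).
payload = json.dumps({"public_id": "user-1234", "intent": "shopping"}).encode()
signature = private_key.sign(payload)

# On the receiver: verify the signature against the broadcast public ID.
try:
    public_key.verify(signature, payload)
    print("Signature valid: personalise the experience")
except InvalidSignature:
    print("Signature invalid: ignore the broadcast")
```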
According to Forbes' Customer Service & CX research, 70% of customers say a personalised experience is important to them: one where the employee knows their history with the company, like past purchases, buying patterns, support calls and more.
For example, you walk into H&M, and your device already knows your style, your size, and what you’re looking for. It shares any relevant info with the store, and a virtual assistant recommends clothes based on your needs and budget. Virtual assistants could be apps that connect to your device, or AI-generated clones of models that show up on your phone or nearby TV. The shop could even show an AI clone of yourself on a smart TV if you want to see how something would look.
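To make that exchange a bit more concrete, here's a toy sketch of the idea: the device shares a small preferences payload, and the shop's assistant filters its catalogue against it. All the field names, items and prices here are invented for illustration:

```python
# A hypothetical preferences payload the device might share with the store.
preferences = {
    "sizes": {"tops": "M", "jeans": "32/32"},
    "style_tags": ["minimal", "casual"],
    "budget_per_item": 60,  # in euros, for illustration
}

# A toy in-store catalogue; in reality this would come from the shop's own systems.
catalogue = [
    {"name": "Linen shirt", "size": "M", "tags": ["minimal"], "price": 35},
    {"name": "Graphic hoodie", "size": "M", "tags": ["streetwear"], "price": 35},
    {"name": "Slim jeans", "size": "32/32", "tags": ["casual"], "price": 80},
]

def recommend(prefs, items):
    """Return items that match the visitor's style tags and stay within budget."""
    wanted = set(prefs["style_tags"])
    return [
        item for item in items
        if wanted & set(item["tags"]) and item["price"] <= prefs["budget_per_item"]
    ]

for item in recommend(preferences, catalogue):
    print(f'{item["name"]}: €{item["price"]}')
```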
Now imagine controlling that experience with your voice, asking the shop’s virtual assistant to display different outfits on the screens around you.
Interaction design needs to move beyond the screen. Imagine you're wearing your AirPods and you're on a call, not with a friend but with an AI assistant. Audio computers don't need touchscreens or buttons. They listen, understand and respond in a human-like voice. If you've ever used ChatGPT with a text-to-speech app, you know what I'm talking about.
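If you've never tried it, the whole loop is surprisingly simple to sketch. Here's roughly what listen, think, speak looks like using the OpenAI Python SDK's transcription, chat and speech endpoints; the file names and model choices are just placeholders:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# 1. Listen: transcribe a short recording captured from the microphone.
with open("question.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# 2. Understand and respond: send the text to a chat model.
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = reply.choices[0].message.content

# 3. Speak: turn the answer back into audio.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
with open("reply.mp3", "wb") as f:
    f.write(speech.read())
```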
That changes everything. Words can evoke strong emotions and shape how people feel about a situation. UX designers are no longer just designing layouts or icons; they're designing timing, tone and intent. They have to think about how sound fits into real-life situations. These computers need to be aware of context: where you are, what you're doing, and whether it's even a good time to speak.
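Just to make "is it even a good time to speak" concrete, here's a tiny, hypothetical context gate. The signals and thresholds are invented, but the point is that the assistant checks context before it says anything out loud:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Context:
    """Hypothetical signals a device might already have access to."""
    local_time: time
    in_meeting: bool
    ambient_noise_db: float

def should_speak(ctx: Context, urgent: bool = False) -> bool:
    """Decide whether speaking out loud is appropriate right now."""
    if urgent:
        return True
    if ctx.in_meeting:
        return False                    # stay quiet in meetings
    if ctx.local_time >= time(22, 0):   # late at night, default to silence
        return False
    if ctx.ambient_noise_db > 80:       # too loud to be heard anyway
        return False
    return True

print(should_speak(Context(time(14, 30), in_meeting=False, ambient_noise_db=55)))
# -> True
```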
This future doesn’t feel that far off anymore. In fact, the last few weeks have made it feel like it’s already starting. Sam Altman said:
“Our AI models are ahead of the products we’ve built with them. There’s this huge gap between what models can do and what’s actually out there. That gap is a massive opportunity for startups.”
Maybe Sam’s not just talking about software. Maybe that “massive opportunity” is in hardware too, and that’s exactly what OpenAI is betting on. Here are two things that point in that direction:
Apple launched Apple Intelligence: It's now rolling out on some iPhones, with much of the processing happening on-device.
OpenAI bought IO, Jony Ive’s AI hardware startup: Jony Ive teamed up with Sam Altman to build new kinds of devices, made specifically for AI. Jony’s the same person who designed the iPhone, iMac and iPod.
That's important for a couple of reasons. Right now, OpenAI lives inside other people's devices: phones, laptops, browsers. That makes them dependent on Apple, Google and all the rules that come with that: App Store fees, platform restrictions, privacy policies. If they want to build something new and exciting, they can't keep living in someone else's house. They need their own platform.
But before building a platform, they need to build the environment that makes their platform work. Apple, for example, had to push mobile carriers to support visual voicemail, improve data speeds and adapt to new usage patterns. Tesla also built charging networks, battery supply chains and energy products. As Elon Musk once said: if you’re building something that changes behaviour or expectations, you need to update or create the systems around it too. Products and infrastructure go hand in hand if you want (1) adoption at scale and (2) to stay ahead of the competition.
At one point, I thought OpenAI had acquired IO to build a bridge to Apple, to compete with Google and Microsoft. But now it looks like they’re sailing off, finding a new island and building their own city, with Jony Ive as the architect and Sam Altman as the mayor.
What's becoming clear is that the AI race isn't just about building the smartest models; it's about building the platform that powers the next generation of AI devices and apps.
Sam and Jony believe today's devices are old tech trying to keep up with something completely new, that they're simply not made for how we use AI. So they want to design hardware from scratch, built around AI rather than apps. They also brought in around 50 to 60 top engineers and designers from IO (a lot of them ex-Apple), so OpenAI now has a proper hardware team.
OpenAI clearly doesn’t want to stop with chatbots. They’re going for the next big thing. Virtual assistants, AI-native devices and personalised experiences. It’s all starting to come together.
Let’s not forget that the last time Jony Ive partnered with someone who wore a black turtleneck and talked about changing the world, they gave us the iPhone. 🙌
Sources: