
Jony Ive, the legendary designer behind Apple’s iconic products (iPhone, iPad, Mac, etc.), has long been associated with shaping how we physically engage with technology. Now, he’s turned to a new frontier: AI-first devices. Recently at OpenAI’s DevDay conference, Ive revealed that he and his team are juggling 15 to 20 different ideas for potential AI devices — a clear signal that OpenAI’s ambitions in hardware are deeper and broader than many had expected.
This revelation builds on an ongoing transition: OpenAI acquired Ive’s AI-hardware startup, io, in a multibillion-dollar deal, folding its engineering and product team more tightly into OpenAI’s ecosystem. But the project is still shrouded in mystery: What kinds of devices are being considered? How will they depart from our current paradigm of screens and touch? And what might this mean for the future of everyday computing?
Let’s dive into what we know so far, what Ive’s comments suggest, and why this could mark a turning point in how humans interact with AI.
The Foundations: io, OpenAI, and the Acquisition
Before we explore the possible devices, it’s worth setting the stage: the strategic acquisition and blending of design and AI.
- In May 2025, OpenAI officially acquired io Products Inc., Ive’s hardware venture. The deal reportedly valued io at around $6.5 billion and transferred its engineering teams into OpenAI.
- Post-acquisition, Ive and his design collective, LoveFrom, remain independent but now assume deep creative and design responsibilities across OpenAI’s ventures.
- Ive and Sam Altman (OpenAI’s CEO) have publicly framed this as a mission to build a “family of devices” that would help shift how people interact with AI—less about screens, more about presence, context, and intuition.
- The acquisition is OpenAI’s largest thus far, underscoring how seriously the company is taking hardware as an integral piece of its AI strategy.
- However, the journey isn’t frictionless: a trademark dispute with a startup called Iyo (IYO) forced OpenAI to temporarily pull public marketing references to the “IO” brand.
In short: Ive + io’s hardware ambition is now inseparable from OpenAI’s future. The real question: Which 15–20 ideas survive to become products?
What We Know (and What We Don’t)
Because of non-disclosure, secrecy, and legal constraints, the public has only glimpses of what these future devices might (or might not) be. But those glimpses are provocative.
Non-Wearables & Screenless Devices
- According to court filings from io, the “first” AI device will not be a wearable or an in-ear device.
- The device is likely to be screenless or minimally screen-dependent, and act more like a contextual “assistant” device that communicates via voice or ambient sensing.
- It’s envisioned as a new class of device—not a phone, not glasses, not a smartwatch—but something entirely different.
- Reports describe it as pocket-sized, context-aware, and capable of connecting to existing devices (phones, computers) to augment and integrate workflows.
- Some speculative mockups evoke a neck-worn, iPod Shuffle-like device, but that remains unconfirmed.
Timeline, Focus & Constraints
- The product is not expected to launch until at least 2026, with design and engineering still in active development.
- In interviews, Ive has said the biggest challenge is focus: with 15–20 compelling product ideas, picking the ones that truly deserve execution is nontrivial.
- He also expressed criticism toward current smartphones and tablets, hinting that their design decisions have fueled anxiety, distraction, and a strained relationship with technology.
- Ive has cited a desire for devices that feel inevitable and obvious in their design—so much that the user wonders how we ever lived without them.
- OpenAI’s CFO, Sarah Friar, has commented that the new hardware has the potential to phase out typing or texting entirely, pushing toward a multimodal future (hearing, seeing, speaking).
- Meanwhile, there is internal pressure: they are reportedly aiming to deliver 100 million AI devices faster than any prior new hardware rollout.
The 15–20 AI Ideas: What Might They Be?
Ive’s revelation that there are so many viable device concepts opens up fascinating speculation—and some real clues. Below, I outline plausible categories (and tradeoffs they might entail).
| Device Concept | Key Features / Use Cases | Constraints & Challenges |
| --- | --- | --- |
| Ambient AI Companion | A context-aware device that “listens and helps” without screens. | Privacy concerns; always-on sensing; edge vs. cloud compute tradeoff |
| AI Earpiece / Hearable | Conversation assistant, translation, noise adaptation. | They already ruled out in-ear designs; heat, battery, miniaturization |
| AI Pen / Stylus | Writes/gestures with intelligence, acts as a smart tool. | Surface recognition, latency, form factor, charging |
| AI Necklace / Pendant | Worn passively, communicates via voice and minimal UI. | Usability, fashion/comfort, audio directionality |
| Desk Companion / Hub | A “third core device” on your desk, ambient AI interface. | Does it replace screens? Or complement them? |
| Home Sensor Device | AI aware of environment: sound, movement, emotions. | Security, local vs cloud processing, context interpretation |
| AR Glasses / Smart Glass | Visual overlay, AR interface, voice interaction. | They ruled out “wearables” for now; optics, weight, battery |
| AI Badge / Clip | Small clip-on that listens, senses, interacts. | Mics, orientation, user acceptance, social norms |
| Modular Add-on for Phones | AI extension module that docks with your phone. | Seamless integration, component cost |
| Communication Translator Device | Language translation in real time. | Latency, translation accuracy, connectivity |
| Health / Mood Wearable (non-in-ear) | Noninvasive AI that tracks emotional state, vitals. | Sensor accuracy, privacy, hardware calibration |
| Ambient Display Device | Minimal display (e-ink, hologram) with voice-first behavior. | Balancing minimal UI with necessary feedback |
| Smart Jewelry / Jewelry AI | Blends fashion and function, subtle AI cues. | Design constraints, battery, aesthetics |
| Edge AI Module / Plug | Small plug-in hardware that adds AI to other systems. | Compatibility, performance, ecosystem support |
| Adaptive AI Remote / Control | A universal remote infused with AI. | Interface design, context awareness, simplicity |
| AI Task Assistant Device | Purpose-specific units (e.g. meeting assistant, writing companion) | Niche appeal vs general-purpose tradeoff |
| Gesture / Spatial AI Device | Controls through air gestures, spatial cues. | Reliability, UX learning curve |
Of these, some seem less likely (e.g. in-ear or classic wearables) based on what the io team publicly denied. Others, like ambient companions or smart pens, align closely with Ive’s design philosophy—tools that disappear into the background until needed.
Ive’s goal, as he’s stated, is not to create another luxury object, but a device that feels humane, calm, and emotionally intelligent, rather than alienating or anxiety-inducing.
In his own words, “If we can’t smile honestly, if it’s just another deeply serious exclusive thing, I think that would do us all a huge disservice.”
Why This Matters: The Stakes of AI Hardware
Why would OpenAI pour resources into physical devices rather than stay software-first? Because fully realizing AI’s promise requires new interfaces: devices shaped around intelligence as a first-order principle, not bolted on as an app afterthought.
Here are some of the bigger implications:
1. Displacing the Smartphone Paradigm
Ive and Altman envision a world where AI mediates your relationship with technology—not by adding more screens, but by subtracting friction. The new devices may operate in the background, automatically anticipating user needs. If successful, the traditional smartphone might become an intermediary rather than the endpoint.
2. Reimagining Interaction Models
Keyboards, touchscreens and taps are legacy interfaces. The AI devices being conceived could shift us toward multimodal interaction—listening, speaking, seeing, sensing. The CFO of OpenAI has even floated that such hardware could phase out texting as a primary form of interaction.
3. The Ethics & Responsibility of Design
Ive has repeatedly acknowledged the “unintended consequences” of earlier technologies—the addiction loops, attention drain, social media harms. This project gives him a chance to “redress” that legacy. He wants designs that feel more humane and responsible—not just in appearance, but in how they shape behavior.
4. Competitive Threat to Apple & Others
Combining OpenAI’s AI with Ive’s design pedigree is a powerful statement. Apple has been comparatively slow to integrate generative AI into its products. The entry of OpenAI + Ive into hardware could disrupt incumbents, especially if they find a compelling device niche.
5. Manufacturing & Scale Pressure
Shipping hardware at scale is notoriously tricky. Ive and OpenAI reportedly plan to deliver 100 million devices faster than any new hardware in history. The supply chain, quality control, manufacturing logistics, and regulatory compliance will be nontrivial challenges—especially given how groundbreaking many of these ideas are.
Risks, Unknowns & Challenges
Of course, such ambition comes with many risks. Some of them:
- Privacy & surveillance concerns: Always-listening devices with sensors and cameras raise serious questions about data control, edge vs cloud AI, local processing, and user trust.
- Battery life & thermal constraints: Small form factors with ambient AI processing are notoriously difficult to power efficiently.
- User acceptance & adoption: It’s a steep ask to convince people to adopt a new device paradigm—especially one that’s not obviously a phone or computer.
- Software & integration complexity: For these devices to be valuable, they must integrate seamlessly into your ecosystem (phone, cloud, apps). Misalignment or friction would drive rejection.
- One-off vs general-purpose: Among 15–20 ideas, some may be too niche. Choosing the “right” ones is high-stakes.
- Competition & timing: Other companies are already exploring AI hardware (e.g. AI wearables, AR glasses). Speed matters.
- Legal & IP disputes: The IYO trademark dispute is already affecting how OpenAI can market its hardware.
Despite these challenges, Ive’s willingness to openly reveal the scale of ideation (15–20 ideas) is itself a sign of confidence in the ambition.
What to Watch For in the Coming Years
As Jony Ive and OpenAI pursue these 15–20 ideas, here’s what to keep an eye on:
- Prototypes and leaks — Hardware designs tend to surface early through supply chains and patent filings.
- Developer or partner signups — If OpenAI opens an ecosystem (APIs, accessory makers), that’s a strong signal of commitment.
- First product(s) launch — Likely in 2026, possibly a “companion” device to phones or desktops.
- Ecosystem integration — Deep synergy with ChatGPT, DALL·E, or other OpenAI models will be key to utility.
- Regulatory & privacy frameworks — How OpenAI designs data handling, anonymization, edge processing will affect user trust.
- Market reaction & adoption metrics — If early adoption is strong, it could validate the idea of AI-native hardware as a mainstream category.
- Competitive responses — Watch how Apple, Google, Meta or others respond—will they accelerate or pivot toward similar ideas?
Conclusion: A New Frontier for AI + Design
Jony Ive’s admission that he and OpenAI are developing concepts for 15 to 20 AI devices is both bold and revealing. It signals that they’re not looking for one silver-bullet product, but a generational shift in how we think about intelligent devices.
If even one or two of those ideas land successfully—devices that dissolve rather than demand attention, that feel humane rather than intrusive—it could reshape how we live with AI. Rather than interacting through screens, perhaps we’ll interact around intelligence.
Time will tell which ideas survive the gauntlet of design, engineering, and market pressures. But the promise is high: a chance to rethink the legacy of the smartphone era and usher in a more ambient, human-centered future.
