4 AI features we were promised that still haven’t arrived

Key Takeaways

  • None of the Apple Intelligence features are available at the launch of iOS 18.
  • Tech giants like Google, Amazon, and OpenAI overpromise AI features, causing disappointment.
  • Some features take more than a year to become available.



The next generation of the iPhone has arrived, with the iPhone 16, iPhone 16 Plus, iPhone 16 Pro, and iPhone 16 Pro Max officially announced at Apple’s Glowtime event. You’ll be able to get your hands on the new models from September 20. What you won’t get on September 20 is Apple Intelligence.

If you buy your new iPhone on day one, it will be completely devoid of the eagerly anticipated AI features, such as a better Siri and custom emoji. Even on the very latest iPhone, none of the Apple Intelligence features will be available, because they’re simply not ready yet.

The first set of features won’t be released until the iOS 18.1 update, which should arrive in October, and even then, you’ll only get a few of the less impressive features. Image generation features aren’t likely to arrive until December, and the biggest improvements to Siri probably won’t arrive until 2025, by which point Apple will be getting ready to announce the amazing features that iOS 19 will definitely have at launch, honest.


Tech companies need to stop wowing us with what their AI features will be able to do at some undisclosed date in the future, and start setting more realistic expectations. Announcing features that you can’t release on time doesn’t serve anyone; consumers end up disappointed, and the credibility of companies takes a hit. Apple isn’t the only company to tease us with the promise of amazing AI capabilities that have yet to appear. All the big players, including Amazon, Google, and OpenAI, the creator of ChatGPT, have done the same. Here are four of the worst examples of tech companies promising AI features that they then fail to deliver on time.

1 Some Apple Intelligence features won’t arrive until 2025

None of the AI features are available at the launch of iOS 18



It’s fair to say that Apple has been severely lagging behind its rivals when it comes to AI. The sudden explosion of AI chatbots seemed to take Apple by surprise, and it has been playing catch-up ever since. Plenty of Android phones already have AI features built in, but iOS 18 and the new iPhone 16 models mark the first time the same can be said of the iPhone.

Apple showcased a whole range of AI features that would be coming to the iPhone at its WWDC24 event back in June, a suite the company dubbed Apple Intelligence. Some of the features, such as significant improvements to Siri, generated genuine excitement about how AI could make using your iPhone easier and unlock new capabilities.

Not a single one of the Apple Intelligence features is available at launch. Not ONE. The first features will arrive in the iOS 18.1 update, which is expected to appear in October, but even then, the feature list will be minimal. Virtually the only Siri upgrade we will see this year is a different animation when Siri is active.

December should see iOS 18.2 arrive, adding image generation features such as Image Playground and the ability to create your own custom emoji. The best Siri features, however, such as the ability to take information directly from the current screen, contextual understanding of requests, and the ability to perform actions across apps, probably won’t arrive until 2025. The full Apple Intelligence feature set could genuinely land only a short while before WWDC25 rolls around with the next set of promises that are hard to keep.


2 Multimodal ChatGPT looks amazing, but we’re still waiting for the simplest features

ChatGPT Advanced Voice Mode is still in a very limited alpha


ChatGPT is one of the biggest names in the world of AI chatbots, so much so that Apple has even baked ChatGPT into iOS 18, although (surprise, surprise) this feature won’t arrive until the end of the year.

OpenAI, the company behind ChatGPT, is equally guilty of getting users excited about what upcoming versions of its products will be able to do, only to fail to deliver on those promises. Back in May, OpenAI held its Spring Update event, at which it introduced its latest model, GPT-4o. The new ChatGPT model was available not long after the event, allowing people to start using it. So far, so good.


However, at release, GPT-4o could only accept text and image input. OpenAI said that voice and video would be rolling out “in the coming weeks”. This understandably got people excited, because the voice and video features look seriously impressive. Advanced Voice Mode will allow you to have more natural conversations using your voice, with ChatGPT responding almost instantly. You can interrupt ChatGPT just by starting to talk again, and you can even get ChatGPT to speak at different speeds or with different tones of voice. It’s also possible to use Advanced Voice Mode for real-time translation.


The video capabilities were even more impressive. Demos show people effectively having a video chat with ChatGPT, in which the AI chatbot can see whatever your camera is pointed at. You can point your camera at your pet, for example, and ask ChatGPT to tell you what breed it is, or pan along your bookshelf and ask ChatGPT to make a list of all the books. It’s a significant step up in what AI chatbots can do and would make the app far more powerful and useful.

And yet, here we are, four months later, and despite supposedly rolling out “in the coming weeks”, voice and video are still not available to most users. A very limited rollout of Advanced Voice Mode has begun for some ChatGPT Plus subscribers, but the vast majority of paid and free users still don’t have access. The app currently states that the feature will be available to all Plus users by the end of the fall, but I’m not holding my breath.


The impressive video features are even further away, with no indication that they will be rolling out any time soon. It’s entirely conceivable that we’ll go a full year from the initial announcement and still not have the full capabilities that were showcased.

3 Google Gemini’s incredible demo is still a long way from becoming reality

Even minor features are rolling out at a slow pace


Google is another company that has promised more than it has been able to deliver so far. For example, in October 2023, Google announced a feature that made the ridiculous “enhance” trope from sci-fi shows and movies a reality. We’ve all seen films where there’s a grainy image of a crime scene, and the protagonist shouts “zoom in on that car” followed by “enhance!” and the pixelated image of the car miraculously turns into a perfect high-resolution image of the car’s license plate.



Zoom Enhance does exactly that, using AI to fill in the gaps between pixels when zooming in on an image. Something that was previously science fiction is now (kind of) real. Except the feature didn’t arrive until August this year, ten months after it was announced. It barely scraped onto the current generation of phones before the new models were released.

This isn’t the only example of Google showcasing AI features that aren’t anywhere close to making it into the real world. At the Google I/O event in May, the company showed a video that demonstrated the abilities of a universal AI agent called Project Astra. The video showed features that were remarkably similar to the as-yet-unreleased video capabilities of GPT-4o.



Using a live video feed on a phone, the demonstrator was able to walk around and point the camera at a screen full of code to have the chatbot explain what the code does, or point the phone out of the window and ask what location they were in. After several questions about visible objects, the user asked where the chatbot had seen their glasses, and the chatbot was able to describe where they had been left.

Once again, this is seriously impressive stuff, and it would make AI chatbots so much more useful. However, Google isn’t even talking about when this feature might be available; it’s purely a prototype at this stage. And however impressive it is, it’s utterly useless until it’s actually available for use. Companies need to stop telling us what they will be able to do eventually and start delivering stuff that we can use right now.


4 Amazon’s updated Alexa is still not here after a full year

It may have to turn to other AI companies for help


Back in September 2023 (that’s right, an entire year ago), Amazon gave us a preview of a new AI-powered Alexa, billed as the biggest upgrade to the voice assistant since its release.

Using the power of generative AI, the new Alexa would be able to hold more natural conversations, without you having to use the “Alexa” wake word in front of everything you say. The new Alexa would also have much lower latency, so that talking to Alexa would feel like talking to a real person, rather than having to wait for a response. The new Alexa would even use the camera in some Echo devices to pick up on non-verbal cues as well as what is being said.



The new Alexa would also make controlling your smart home much easier. You’d be able to use complex commands such as “turn off the lights in the living room, close the blinds in the study, and lock the front door” rather than having to issue each command separately. You’d even be able to build complex routines just by saying something such as “Every weekday at seven in the morning, turn on my bedroom light to a warm color temperature, power on my coffee machine, open the blinds in the living room, and read my morning news briefing” and the entire routine would be created for you.


Unfortunately, the new Alexa has still not arrived, with reports suggesting that it may finally appear in October, more than a full year after it was announced. It gets worse, however. Further reports suggest that Amazon was unable to achieve the results it wanted using its own AI models, and that the new Alexa, which will require a paid subscription, will be powered in part by Anthropic’s Claude AI model. That means that after waiting more than a year, we won’t even get the tech we were promised, but rather something reworked to run on another company’s AI model.
