iPhone users will doubtless have suffered autocorrect’s frustrating habits: proposing out-of-context words as you type, making a correct suggestion vanish forever if you type one more character, or mis-correcting entire sentences. The problem appears to have worsened since AI entered the equation.
The Guardian’s “Ducking annoying: why has iPhone’s autocorrect function gone haywire?” explains that since the release of iOS 26, many users have complained that autocorrect seems to have a mind of its own.
It’s not unusual for algorithms to fail, but a smartphone, of all things, should know better. Think about it: we type thousands of words a year, and our phones know which words we use, how often, in what combinations and with which grammatical constructions… shouldn’t the iPhone’s autocorrect be one of the most refined features ever created? And if it isn’t, why not?
The explanation seems to be that Apple has replaced its old n-gram-based autocorrect with transformer-style language models that run directly on the device. More powerful, yes, but also far more opaque and harder to debug. As one of the pioneers of autocorrect explains, failures are no longer the result of simple faulty rules, but of problems interpreting context.
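To see why the old approach was easier to reason about, here is a deliberately toy sketch of an n-gram (bigram) suggester. It is written in Swift purely for illustration; the names and data are invented, and it is obviously not Apple’s actual keyboard code. The point is that every suggestion traces back to a count you can inspect.

```swift
import Foundation

// A toy bigram suggester in the spirit of the old n-gram approach.
// Hypothetical and simplified; not Apple's implementation.
struct BigramSuggester {
    // Counts of previousWord -> nextWord -> frequency, learned from typed text.
    private var counts: [String: [String: Int]] = [:]

    // Learn word pairs from a sample of text.
    mutating func train(on text: String) {
        let words = text.lowercased().split(separator: " ").map(String.init)
        guard words.count > 1 else { return }
        for i in 0..<(words.count - 1) {
            counts[words[i], default: [:]][words[i + 1], default: 0] += 1
        }
    }

    // Suggest the most frequent follower of the previous word, if any.
    func suggest(after previousWord: String) -> String? {
        counts[previousWord.lowercased()]?.max { $0.value < $1.value }?.key
    }
}

var suggester = BigramSuggester()
suggester.train(on: "see you tomorrow see you soon see you tomorrow")
print(suggester.suggest(after: "you") ?? "no suggestion")   // prints "tomorrow"
```

When a model like this misfires, you can open the table and see exactly which count caused it; with an on-device neural model there is no such table to open.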
Here we reach another interesting layer: the personalized model. It stands to reason that a phone you type on so often could learn from you. It could recognize your vocabulary, your expressions, your frequent mistakes, and offer accurate, almost invisible corrections. It could incorporate a small language model adapted to the user, without sending all that data to the cloud, thus preserving privacy. Research on small models suggests precisely that: they are powerful enough, inherently better suited and far cheaper for this kind of specialized task.
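By way of illustration, personalization can be as simple as keeping a small frequency table of the words you actually accept and letting it outrank the generic dictionary, with nothing ever leaving the phone. Again, this is a hypothetical Swift sketch with invented names, not a description of Apple’s pipeline.

```swift
import Foundation

// A minimal sketch of on-device personalization: a generic word list plus a
// per-user frequency table that stays on the phone. Hypothetical names only.
struct PersonalizedCorrector {
    let genericLexicon: Set<String>       // shipped with the keyboard
    var userCounts: [String: Int] = [:]   // learned locally, never uploaded

    // Record a word the user typed and kept (i.e. did not delete).
    mutating func recordAccepted(_ word: String) {
        userCounts[word.lowercased(), default: 0] += 1
    }

    // Prefer candidates the user actually uses; fall back to the generic list.
    func correct(_ typed: String, candidates: [String]) -> String {
        if let personal = candidates.max(by: {
            (userCounts[$0.lowercased()] ?? 0) < (userCounts[$1.lowercased()] ?? 0)
        }), (userCounts[personal.lowercased()] ?? 0) > 0 {
            return personal
        }
        return candidates.first { genericLexicon.contains($0.lowercased()) } ?? typed
    }
}

var corrector = PersonalizedCorrector(genericLexicon: ["ducking", "during"])
corrector.recordAccepted("Dans")
print(corrector.correct("Dnas", candidates: ["Dans", "during"]))   // prints "Dans"
```

A real keyboard would combine a signal like this with a much larger model, but the principle stands: the personal part can live entirely on the device.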
So what’s the problem? Some hypotheses:
- Privacy as an obstacle: Apple insists that the model run on-device, which limits the data it can draw on and probably reduces its accuracy
- Model size vs. resources: to run without draining the battery or memory, the model has to be compact, but the increasingly frustrating corrections suggest that balance isn’t working
- The opacity of the algorithm: when a mistake happens you cannot see why it happened, unlike with a simple spelling rule, and that erodes user trust
- Excessive expectations: in a world where we talk to AI ever more often, anything “automatic” is expected to be nearly perfect; autocorrect, however, is ideally invisible, and every failure makes it visible and annoying
This reminds me of what my students tell me when I discuss these topics: trust versus effectiveness. Everything we type is personal, everyday, even intimate. For the phone to fail at that level is a kind of functional betrayal. It is not just uncomfortable; it is particularly frustrating.
The bigger question may be this: are we willing to settle for an autocorrect that merely stays out of the way? Because if you expect it to “learn from you”, to “understand you” or to “be your tool”, and instead you get an opaque system that sometimes behaves like a troll and even gets you into trouble, the result is a loss of trust. And in technology, trust is a more fragile asset than the phone’s screen.
iPhone autocorrect should be one of those discreet services you don’t even realize you have… until it starts to fail. And when it fails, we discover two things: that the device knew much less about our writing than we thought, and that the promise of “personalized intelligence” is not enough to win over users if it is not accompanied by transparency, control and predictability. If Apple or any other manufacturer wants its keyboards to “learn to type for us”, it will have to start by learning not to make so many mistakes.
—
This post was previously published on Enrique Dans’ blog.
Photo credit: iStock