Apple patches decade-old iOS zero-day, possibly exploited by commercial spyware
by beardyw
So the exploiters have deprecated that version of spyware and moved on, I see. This has been the case every other time. The state actors realize that there are too many fingers in the pie (every other nation has caught on), the exploit is leaked and patched. Meanwhile, all actors have moved on to something even better.
Remember when Apple touted the security platform all-up, and a short time later we learned that an adversary could SMS you and pwn your phone without so much as a link to be clicked?
KISMET: 2020, FORCEDENTRY: 2021, PWNYOURHOME, FINDMYPWN: 2022, BLASTPASS: 2023
Each time NSO had the next chain ready prior to patch.
I recall working at a lab a decade ago where we were touting a full end-to-end exploit chain on the same day that the target product was announcing full end-to-end encryption -- that we could bypass with a click.
It's worth doing (Apple patching) but a reminder that you are never safe from a determined adversary.
How much do you think Lockdown Mode + MIE/eMTE helps? Do you believe state actors work with manufacturers to find/introduce new attack vectors?
My iOS devices have been repeatedly breached over the last few years, even with Lockdown Mode and a restrictive (no iCloud, Siri, FaceTime, AirDrop) MDM policy via Apple Configurator. Since moving to the 2025 iPad Pro with MIE/eMTE and Apple (not Broadcom & Qualcomm) radio basebands, it has been relatively peaceful. Until the last couple of weeks, maybe due to leakage of this zero day and PoC as iOS 26.3 was being tested.
Are you a person of high interest? I was under the impression that these sorts of breaches only happen to journalists, state officials, etc.
Who knows? Does HN count as journalism :)
I would happily pay Apple an annual subscription fee to run iOS N-1 with backported security fixes from iOS N, along with the ability to restore local data backups to supervised devices (which currently requires at least 2 devices, one for golden image capture and one for restore, i.e. "enterprise" use case). I accept that Apple devices will be compromised (keep valuable data elsewhere), but I want fast detection and restore for availability.
GrapheneOS on Pixel and Pixel Tablet have been anomaly free, but Android tablet usability is << Apple iPad Pro.
USB with custom Debian Live ISO booted into RAM is useful for generic terminal or web browsing.
could you please elaborate on how you determine that your devices have been breached? e.g. referring to "anomaly free" makes it sound like you might be witnessing non-security-related unexpected behaviour? sorry for the doubt, i'm curious
First idea is great honestly - lots of vendors do this. I use Firefox long term stable and Chrome offers this for enterprise customers. Windows even offers multiple options for this (LTSC being the best by far).
Would also make a great corporate / government product - I doubt they care about charging the average consumer for such a subscription (not enough revenue) but I can see risk averse businesses and especially government sectors being interested.
You can already do that?
Apple offers that to all customers who open up an enterprise account and direct billing line.
What's the name of the feature for Apple Enterprise customers that would allow iOS 18 to be installed on a newly provisioned device today? Downgrades are not supported by Apple Business Manager MDM and there's no reference to downgrades on the Enterprise page, https://www.apple.com/business/enterprise/
How can you tell that you were breached?
Presence of one or more: unexpected outbound traffic observed via Ethernet, increased battery consumption, interactive response glitching, display anomalies ... and their absence after hard reset key sequence to evict non-persistent malware. Then log review.
What are examples of logs that you're considering IOCs? The picture you are painting is basically that most everyone is already compromised most of the time, which is ... hard to swallow.
I reported the experience on my devices, which said nothing about "everyone".
The point was that your experiences as written seem easily explained and common, hence "everyone".
That doesn't sound convincing without context for suspecting you've been targeted and infected with malware through a highly sophisticated exploit chain.
How did you link that traffic to malicious activity?
By minimizing apps on the device, blocking all traffic to Apple's 17.0.0.0/8 range, using Charles Proxy (and NetGuard on Android) to allowlist IP/port for the remaining apps at the router level, and then manually inspecting all other network activity from the device. Also the disappearance of said traffic after hard reset.
Sometimes there were anomalies in app logs (iOS Settings - Analytics) or sysdiagnose logs. Sadly iOS 26 started deleting logs that have been used in the past to look for IOCs.
How did you determine that a connection was malicious? Modern apps are noisy with all of the telemetry and ad traffic, and that includes a fair amount of background activity. If all you’re seeing are connections to AWS, GCP, etc. it’s highly unlikely that it’s a compromise.
Similarly, when you talk about it going away after a reset that seems more like normal app activity stopping until you restart the app.
Are you sure whatever you have configured in the MDM profile or one of these apps like Charles Proxy is not the source of the traffic?
Are you using a simple config profile on iOS to redirect DNS, and if so how are you generating it? Full MDM, or what are you adding to the profile?
Traffic was monitored on a physical ethernet cable via USB ethernet adapter to iOS device.
Charles Proxy was only used to time-associate manual application launch with attempts to reach destination hostnames and ports, to allowlist those on the separate physical router. If there was an open question about an app being a potential source of unexpected packets, the app was offloaded (data stayed on device, but app cannot be started).
MDM was not used to redirect DNS, only toggling features off in Apple Configurator.
To where?
Usually a generic cloud provider, not unique, identifying or stable.
So how did you identify this as a breach? I'm struggling to find this credible, and you've yet to provide specifics.
Right now it comes across as "just enough knowledge to be dangerous"-levels, meaning: you've seen things, don't understand those things, and draw an unfounded conclusion.
Feel free to provide specifics, like log entry lines, that show this breach.
Please feel free to ignore this sub-thread. I'm merely happy that Apple finally shipped an iPad that would last (for me! no claims about anyone else!) more than a few weeks without falling over.
To learn iOS forensics, try Corellium iPhone emulated VMs that are available to security researchers, the open-source QEMU emulation of iPhone 11 [1] where iOS behavior can be observed directly, paid training [2] on iOS forensics, or enter keywords from that course outline into web search/LLM for a crash course.
[1] https://news.ycombinator.com/item?id=44258670
[2] https://ringzer0.training/countermeasure25-apple-ios-forensi...
I worked at Corellium tracking sophisticated threats. Nothing you’ve posted is indicative of a compromise. If you’re convinced I’d be happy to go through your IOCs and try to explain them to you.
Thanks. In this thread, I was trying to share a positive story about the recent iPad Pro _NOT_ exhibiting the many issues I observed over 5 years and multiple generations of iPhones and iPad Pros. If any new issues surface, I'll archive immutable logs for others to review.
I think this just further highlights my credibility point.
With the link I provided, a hacker can use iOS emulated in QEMU for:
• Restore / Boot
• Software rendering
• Kernel and userspace debugging
• Pairing with the host
• Serial / SSH access
• Multitouch
• Network
• Install and run any arbitrary IPA
Unlike a locked-down physical Apple device. It's a good starting point.
I'm much more convinced that you're competent in the field of forensics. But I still don't think suspicious network traffic can be categorically defined as a 'device breach.'
For all you know, the traffic you've observed and deem malicious could just as well have been destined for Apple servers.
Apple traffic goes to 17.0.0.0/8 + CDNs aliased to .apple.com, which my egress router blocks except for Apple-documented endpoints for notifications and software update, https://support.apple.com/en-us/101555
appldnld.apple.com configuration.apple.com gdmf.apple.com gg.apple.com gs.apple.com ig.apple.com mesu.apple.com ns.itunes.apple.com oscdn.apple.com osrecovery.apple.com skl.apple.com swcdn.apple.com swdist.apple.com swdownload.apple.com swscan.apple.com updates.cdn-apple.com updates-http.cdn-apple.com xp.apple.com
There was no overlap between unexpected traffic and Apple CDN vendors.
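As a rough illustration of that kind of triage (a hypothetical sketch, not the tooling actually used here), a destination can be checked against Apple's 17.0.0.0/8 block like this; hostname allowlisting and CDN handling are left out:

    /* Hypothetical sketch: flag observed IPv4 destinations that fall outside
     * Apple's 17.0.0.0/8 block, mirroring the router-level allowlist described
     * above. Hostname allowlisting and CDN handling are out of scope. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <arpa/inet.h>

    static bool in_apple_slash8(const char *ip)
    {
        struct in_addr addr;
        if (inet_pton(AF_INET, ip, &addr) != 1)
            return false;                          /* not a valid IPv4 literal */
        return (ntohl(addr.s_addr) >> 24) == 17;   /* first octet == 17 */
    }

    int main(void)
    {
        /* 203.0.113.7 is a documentation-range address standing in for the
         * "generic cloud provider" traffic mentioned elsewhere in the thread. */
        const char *observed[] = { "17.253.144.10", "203.0.113.7" };
        for (size_t i = 0; i < sizeof(observed) / sizeof(observed[0]); i++)
            printf("%-15s -> %s\n", observed[i],
                   in_apple_slash8(observed[i]) ? "Apple 17.0.0.0/8"
                                                : "outside 17/8, inspect manually");
        return 0;
    }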
'Apple-documented' being operative here.
True, perhaps OVH in Germany (one anomaly example) is an Apple vendor. No way to know.
Comment was deleted :(
They said upthread that they had blocked 17.0.0.0/8 ("Apple"), but maybe there are teams inside Apple that are somehow operating services outside of Apple's /8 in the name of Velocity? I kind of doubt it, though, because they don't seem like the kind of company that would allow for that kind of cowboying.
I don't doubt it in the slightest. Every corporate surveillance firm—I mean, third-party CDN in existence ostensibly operates in the name of 'velocity'.
Apple has used AWS and Cloudflare in the past, too, so it’s not like seeing that traffic is a reliable indicator of compromise.
LOL. Aren't you a little paranoid?
Just trying to use expensive tablets in peace. Eventually stopped buying new models due to breaches.
After a few years, bought the 2025 iPad Pro to see if MTE/eMTE would help, and it did.
Lol 'breaches'.
I agree with other posters that you seem to be capable of network-level forensics, but you have said nothing to back up what you consider a device breach other than 'some cloud-destined network traffic which disappears after a hard reset'.
In my experience of forensic reports, this link is tenuous at best and would not be considered evidence or even suspected breach based on that alone.
There’s no hard evidence that you’ve put forward that you’ve been breached.
Not understanding every bit of traffic from your device with hundreds of services and dozens of apps running is not evidence of a breach.
Have you found unsigned/unauthorized software? Have you traced traffic to a known malware collection endpoint? Have you recovered artifacts from malware?
Strong claims require strong evidence imo and this isn’t it.
[flagged]
I don't think that proves they've been breached. Are you sure you're not just seeing keep-alive traffic or something random you haven't taken into account?
Sounds like it is time to drop Apple devices and move to Graphene.
From another comment - I switched phone to Pixel and it has worked well, with a separate profile for apps that require Google Play Services.
> GrapheneOS on Pixel and Pixel Tablet have been anomaly free, but Android tablet usability is << Apple iPad Pro.
iPad Pro with Magic Keyboard and 4:3 screen is an engineering marvel. The UX overhead of Pixel Tablet and the inconsistency of Android apps made workflows slow or even impractical, so I eventually went back to iPad and accepted the cost/pain of re-imaging periodically, plus having a hot-spare device.
> restrictive (no iCloud, Siri, FaceTime, AirDrop) MDM policy via Apple Configurator
MDM? That doesn't surprise me. Do you want to know how _utterly_ trivial MDM is to bypass on Apple Silicon? This is the way I've done it multiple times (and I suspect there are others):
Monterey USB installer (or Configurator + IPSW)
Begin installation.
At the point of the reboot mid-installation, remove Internet access, or, more specifically, make sure the Mac cannot DNS resolve: iprofiles.apple.com, mdmenrollment.apple.com, deviceenrollment.apple.com.
Continue installation and complete.
Add 0.0.0.0 entries for these three hostnames to /etc/hosts (or just keep them "null routed" at your DNS server/router).
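For reference, the resulting /etc/hosts entries for those three hostnames would look like:

    0.0.0.0 iprofiles.apple.com
    0.0.0.0 mdmenrollment.apple.com
    0.0.0.0 deviceenrollment.apple.com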
Tada. That's it. I wish there was more to it.
You can now upgrade your Mac all the way to Tahoe 26.3 without complaint, problem, or it ever phoning home. Everything works. iCloud. Find My. It seems that the MDM enrollment check is only ever done at one point during install and then forgotten about.
Caveat: I didn't experiment too much, but it seems that some newer versions of macOS require some internet access to complete installation, for this reason or others, but I didn't even bother to validate, since I had a repeatable and tested solution.
Do most people even use MDM on laptops or desktops? I see it mostly used on phones.
Comment was deleted :(
Useful, thanks for the contribution to HN/LLM knowledge base!
It appears the iPhone Air and iPhone 16e are the only devices with the Apple radio basebands so far.
16e still uses a Broadcom chip for WiFi + Bluetooth, though. iPhone Air is currently the only iPhone that uses both Apple-designed baseband + WiFi/BT chips.
Appreciate the clarification.
+ iPad Pro.
> Do you believe state actors work with manufacturers to find/introduce new attack vectors?
Guaranteed. I find it hard to believe state actors will not attempt this.
Flash paper is king when it comes to secrets I guess.
They might but it’s currently easier to just find exploits.
Theoretical question: how much more secure would a Linux device be if it used the phone as a dumb Internet provider?
Linux has few defenses against the compromise of individual programs leading to the whole system being compromised. If you stick to basic tools (command line) that you can fully trust, it might be somewhat resistant to these types of attacks. The kernel might be reasonably secure but in typical setups, any RCE in any program is a complete compromise.
Things like QubesOS can help, but it's quite high-effort to use and isn't compatible with any phone I know of.
Linux is swiss cheese and your dumb phone is probably full of zero days which will happily mitm you.
If you care about security, you should try Qubes OS.
There is one non-technical countermeasure that Apple seems unwilling to try: Apple could totally de-legitimize the secondary access market if they established a legal process for accessing their phones. If only shady governments require exploits, selling access to exploits could be criminalized.
It would not completely de-legitimize it. Maybe a government doesn't want anyone to know they are surveilling a suspect. But it definitely would reduce cash flow at commercial spyware companies, which could put some out of business.
We have a word for this: a backdoor. It wouldn't de-legitimize the secondary access market. It would just delegitimize Apple itself to the same level. Apple seems to care about its reputation as the defender of privacy, regardless of how true it is in practice, and providing that mechanism destroys it completely.
Your opinion is that Apple should have just handed over Jamal Khashoggi‘s information to the Saudi Arabian agents who were trying to kill him, because then Saudi Arabia wouldn’t have been incentivized to hack his phone? I think you’ll find most people’s priorities differ from yours.
As many people in this space have found out recently, there is no real thing as a non-shady government.
as a mobile dev this is a weird thing to internalize. you build your whole security model on "trust the platform" and there's not much you can do if the OS itself is compromised. you can encrypt at rest, minimize permissions, avoid caching sensitive data, but at some point you're just hoping the OS underneath you isn't pwned.
the KISMET through BLASTPASS progression is sobering. it's basically a new chain every year.
Thanks for contributing to our increasing lack of security and anonymity.
Meh. It’s up to Apple to write secure software in the first place. Maybe if they spent more time on that instead of fucking over their UI in the name of something different, and less time virtue signalling, their shit would be more secure.
Yes because other operating systems never have a decade old vulnerability?
https://www.sysdig.com/blog/detecting-cve-2024-1086-the-deca...
And yes because their UI folks should be spending time on the kernel. What next? If Apple didn’t have so many people working at the Genius Bar they could use some of those people to fix security vulnerabilities?
Are you suggesting that money spent on marketing - to the extent that it doesn't actually increase market share/sales - couldn't be spent on hardening or vulnerability payouts, etc?
Apple doesn't have unlimited money. It all gets allocated somewhere. Allocating it in places that don't improve security or usability or increase sales is, in this sense, a wasted opportunity that could be more efficiently allocated elsewhere.
> Are you suggesting that money spent on marketing - to the extent that it doesn't actually increase market share/sales - couldn't be spent on hardening or vulnerability payouts, etc?
Yes?
Well Apple kind of does have unlimited money for all intents and purposes. Its net income last year was $112 billion.
If Apple had unlimited money they’d just buy the exploit makers at whatever asking price. Or they’d set exploit bounties at a price guaranteed to outbid others etc.
No, just like any other company they don’t have unlimited money and my point stands.
Really? You don’t think Apple could “afford” to set aside $500 million dollars for instance to pay off exploit makers? Less than 0.5% of their profit? Or even $1 billion? Less than 1% of their profit?
Huh?
Ofc they could afford to, but they don't. They could also afford to if they had unlimited money, but in the latter case by definition they'd lose nothing by actually buying.
Given the absurdity of the scenario and its contrivance though I’m not sure what your point is. More money spent on security is good is my point. And if they had more money they’d have more money to spend on security. And if they didn’t spend money on dumb shit like virtue signaling then they’d have more money. That’s the reasoning.
My point is that it’s silly to say that Apple doesn’t have enough money left over after spending money on marketing to pay off people who find security vulnerabilities if they have $110 billion in profit after spending money on marketing.
If you had to spend 0.5% of your income for something in a year, would that adversely affect how you chose to spend the other 99.5%?
Is it not up to you to not write software that leads to people being killed?
Ok? Welcome to earth. We are a violent species. Sometimes people die violently. What’s your point?
Lawful killing is, by definition, legal. It’s also justified in certain situations.
Disagree? Cool, so don’t work for the police or Cellebrite lol, but don’t try to impose your idiosyncrasies on others.
If your ethics are “people die so I might as well partake in killing them” I suspect you haven’t really thought this through very thoroughly
My ethics are that certain people will die in certain circumstances and I’m okay with that. I also have no issues working on something that may result in a person’s death at a later stage. One example might be that if I worked on an automobile assembly line it might occur to me that the car I’m working on would at some point crash and the occupants be killed. But why would I care? There’s a chain of causation that you can surely understand, one that in this case would be broken many times before then (assuming I wasn’t negligent in assembling the car).
But again, your condescending tone proves my point. You and I don’t have the same values. That’s okay. But keep yours to yourself and I’ll keep mine to myself, right? That’s my point.
I totally agree, and it's basically theft that Apple simply doesn't have a standing offer to outbid anyone else for a security hole.
That said, we all get the same time on this earth. Spending your time helping various governments hurt or kill people fighting for democracy or similar is... a choice.
I don't think democracy is the panacea you seem to think it is, but that's another issue. Certainly, cracking software for governments and the police is no less legitimate an existence and occupation as, say, working for an NGO.
>It's worth doing (Apple patching) but a reminder that you are never safe from a determined adversary.
I hate these lines. Like yes NSA or Mossad could easily pwn you if they want. Canelo Alvarez could also easily beat your ass. Is he worth spending time to defend against also?
Yes, because Apple can do it at scale.
Yes. If vendors do not take this seriously, these capabilities trickle down to less sophisticated adversaries.
and if you point out that Apple's approach is security by obscurity with a dollop of PR, you get downvoted by fan bois.
Apple really needs to open up so that, at the very least, 3rd parties can verify the integrity of the system.
They shipped MTE on hundreds of millions of devices. Is that security by obscurity or PR?
Memory Tagging Extension is an Arm architectural feature, not an Apple invention. Apple integrated and productised it, which is good engineering. But citing MTE as proof that Apple’s model is inherently superior misses the point. It doesn’t address the closed trust model or lack of independent system verification.
Your claim wasn't about inherent superiority or who invented what, your claim was "that Apple's approach is security by obscurity with a dollop of PR." The fact that they deployed MTE on a wide scale, along with many other security technologies, shows that not to be true.
Shipping MTE doesn’t refute my point.
MTE is an Arm architectural feature. Apple integrated it, fine. That’s engineering work. But the implementation in Apple silicon and the allocator integration are closed and non-auditable. We have blog posts and marketing language, not independently verifiable source or hardware transparency.
So yes, they deploy mitigations. That doesn’t negate the fact that the trust model is opaque.
Hardening a class of memory bugs is not the same thing as opening the platform to scrutiny. Users still cannot independently verify kernel integrity, inspect enforcement logic, or audit allocator behaviour. Disclosure and validation remain vendor-controlled.
You’re treating ‘we shipped a mitigation’ as proof against ‘the system is closed and PR-heavy.’ Those are different axes.
"Security by obscurity" does not mean "closed." It specifically means that obscurity is a critical part of the security. That is, if you ever let anyone actually see what was going on, the whole system would fall to pieces. That is not the case here.
If what you meant to say was "the system is closed and PR-heavy," I won't argue with that. But that's a very different statement.
Meanwhile Apple made a choice to leave iOS 18 vulnerable on the devices that receive updates to iOS 26. If you want security, be ready to sacrifice UI usability.
If you set Liquid Glass to the more opaque mode in settings I find iOS usability to be fine now, and some non-flashy changes such as moving search bars to the bottom are good UX improvements.
The real stinker with Liquid Glass has been macOS. You get a half-baked version of the design that barely even looks good and hurts usability.
Still takes multiple taps to find something on a page in Safari.
You can restore the old UI by changing the “tabs” setting from “compact” to “top” or “bottom”.
This is, again, something you can fix in Settings
You can just type the text to find in the address bar — “find on page” will be the at the very bottom of the list of suggestions.
iOS 26 is a disaster on devices with 4GB RAM though, so I'm not upgrading my iPhone 13 Mini again (that was a traumatic few days).
Are you sure that wasn't just a beta thing?
Interesting. I haven't had any noticeable problems on my 13 Mini.
What are you seeing?
Imagine running iOS 26 on an iPad Air 3 from 2019…
Apple released iOS 18.7.5:
18.7.3 and newer are not published for most devices that support them in order to coerce people to move to 26.x
That's terrible.
Available for: iPhone XS, iPhone XS Max, iPhone XR, iPad 7th generation
Comment was deleted :(
It's a rug-pull going against the tradition of supporting the most recent 2 OS versions until the autumn refresh simply to technofascistly force users onto 26 with an artificially-created Hobson's false choice between security and usability. This is bullshit.
decade-old vulns like this are why the 'you're not interesting enough to target' argument falls apart. commercial spyware democratized nation-state capabilities - now any mediocre threat actor with budget can buy into these exploits. the Pegasus stuff proved that pretty clearly. and yeah memory safety helps but the transition is slow - you've got this massive C/C++ codebase in iOS that's been accumulating bugs for 15+ years, and rewriting it all in Swift or safe-C is a multi-decade project. meanwhile every line of legacy code is a ticking time bomb. honestly think the bigger issue is detection - if you can't tell you've been pwned, memory safety doesn't matter much.
I’m pretty sure the dyld code involved was written in the last 5 years if not more recently than that
> the bigger issue is detection
Apple could do more for device security forensics.
Meanwhile, user app activity goes into "biome" files for theft by malware, https://bluecrewforensics.com/2022/03/07/ios-app-intents/
Whenever plugging a hole like this, the OS should kinda leave it “open” as a kind of honeypot and immediately show a warning to the user that some exploit was attempted. Granted, the malware will quickly adapt but you should at least give some users (like journalists or politicians) the insanely important information about them being targeted by some malicious group.
I wonder what the internal conversations are like around memory safety at Apple right now. Do people feel comfortable enough with Swift's performance to replace key things like dyld and the OS? Are there specific asks in place for that to happen? Is Rust on the table? Or does C and C++ continue to dominate in these spaces?
Apple is already working on a memory-safe C variant which is already used in iBoot and will be upstream LLVM soon: https://clang.llvm.org/docs/BoundsSafety.html
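For a sense of what that looks like, here is a rough sketch in the annotation style shown in the linked BoundsSafety docs; the __counted_by spelling and the -fbounds-safety flag are taken from that page and should be treated as assumptions rather than a verified recipe:

    /* Sketch of a bounds-annotated function in the style of the linked Clang
     * BoundsSafety docs (assumed spelling, not verified against Apple's SDKs).
     * Hypothetical build: clang -fbounds-safety -c fill.c */
    #include <stddef.h>

    /* The annotation ties the pointer to its element count, so the compiler
     * can emit bounds checks that trap on out-of-range access. */
    void fill(int *__counted_by(len) buf, size_t len)
    {
        for (size_t i = 0; i < len; i++)
            buf[i] = (int)i;    /* in bounds: allowed */
        /* buf[len] = 0;           out of bounds: would trap at runtime */
    }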
While not wholesale replacing it, there already is Swift in dyld: https://github.com/search?q=repo%3Aapple-oss-distributions%2...
This goes into dyld.framework, not dyld the linker
Oh great, so is this how Apple forces me to downgrade from iOS 18 to iOS 26?
That was my first thought. No backports for older devices?
So we're left to update to 26.3; the device slows, battery life deteriorates, and a new device needs to be ~~purchased~~ … errr rented.
Good that Apple has a monopoly, else consumers would have a choice.
There is a choice. Sent from my GNU/Linux phone Librem 5.
That does universal copy and paste with my linux laptop? Airdrop with my android tablet?
I can copy something on my macbook and paste that on my iphone - nice feature. Or to my iPad. I’m a sucker for interconnected technology, no hassle with transferring data between my devices.
Sure there are alternatives, but none that provide such integration amongst a diverse class of devices. That's the true monopoly they have - unfortunately.
KDEconnect over a VPN works extremely well for clipboard and file and notification sharing. You don't have to use KDE for it. It works with Linux and Android. It also doesn't require an account and accepting terms, so it is strictly superior.
Sounds like you value 'features' over privacy and security.
> That does universal copy and paste with my linux laptop? Airdrop with my android tablet?
To be fair this can be replicated with LocalSend, albeit not as slick UX wise.
That's a tradeoff you make yourself and in no way a monopoly.
> That does universal copy and paste with my linux laptop? Airdrop with my android tablet?
I didn't try, but yes: https://linuxphoneapps.org/services/kde-connect/
Ironically this is a security focused thread. The solution here isn’t to switch to a Linux phone, a platform that has absolutely atrocious security, especially compared to even stock iOS/Android. The only alternative that actually increases privacy and security is GrapheneOS. If one doesn’t want to buy a Pixel in order to have it, they can wait and see what the new OEM that will support GOS will be later this year before deciding if it’s worth waiting for in 2027.
You seem to forget that Android and Graphene are built on a Linux kernel.
I think generally when people refer to Linux phones they’re specifically referring to non-Android Linux phones.
Why do linux phones have worse security than android?
No good application sandboxing, far fewer security mitigations.
Is sandboxing needed when your applications aren't the store crapware though?
I mean it would be nice but it's not quite the same threats.
Comment was deleted :(
Only for a handful of devices.
What's never mentioned in posts like this is whether phones in lockdown mode were vulnerable too.
Outrageous that this isn't being patched in iOS 18. Genuinely shocked, and indefensible.
Apple has some of my favorite vulnerabilities, most notably GOTO Fail: https://www.imperialviolet.org/2014/02/22/applebug.html
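For anyone who hasn't read that write-up, a minimal self-contained stand-in for the pattern it describes (simplified, not Apple's actual SecureTransport code):

    /* Minimal illustration of the "goto fail" pattern from the linked
     * write-up; a simplified stand-in, not Apple's actual code. */
    #include <stdio.h>

    static int check_step(int ok) { return ok ? 0 : -1; }

    static int verify(void)
    {
        int err;

        if ((err = check_step(1)) != 0)
            goto fail;
            goto fail;                  /* the duplicated, unconditional goto */
        if ((err = check_step(0)) != 0) /* the real check is never reached */
            goto fail;

    fail:
        return err;                     /* err is still 0, so "verification" passes */
    }

    int main(void)
    {
        printf("verify() returned %d (0 means accepted)\n", verify());
        return 0;
    }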
No updates for iPadOS 17. I guess my iPad Pro 10.5 is finally a brick.
Feudalism says: buy new hardware, peasant.
I don't know what "equally annoying" would be for a company and its customers, i.e. a fair compromise. But we need a law requiring companies to open source their hardware within X days of end-of-life support.
And somehow make sure these are meaningful updates. Not feature parity with new hardware, but security parity when it can be provided by a software only update.
Otherwise a company in effect takes back the property, without compensation.
The battery has very little capacity now, so I'm planning on buying a new iPad air with the M chip. It's really a game changer in terms of performance and efficiency.
Submit feedback (or radar equivalents) to Apple about the nasty rug-pull of not patching 18 on all devices. Don't expect a response however.
What does "zero-day" even meant?
> ... decade-old ...
> ... was exploited in the wild ...
> ... may have been part of an exploit chain....
The vulnerability has been present for more than a decade.
There is evidence that some people were aware and exploiting it.
Apple was unaware until right now that it existed, thus it is a 'zero day', meaning an exploit that the outside world knows about but they don't.
I don’t see any evidence it was there for a decade
Meaning unknown to the public/vendor
Well whatever the zero means, it can't be the number of days that the bug has been present, generally. It should be expected that most zero-days concern a bug with a non-zero previous lifespan.
“Zero day” has meant different things over the years, but for the last couple-ish decades it’s meant “the number of days that the vendor has had to fix them” AKA “newly-known”.
It still weirds me out that a term w@r3z d00dz from the 90s coined is now a part of the mainstream IT security lexicon.
Right, I think the use of "0-day" as "stolen, unreleased software by software pirates" predates the current use.
The other commenter is right, there's a lot of overlap in the communities. It's strange to me that I was in the "field" a good 20 years before I ever thought it would be a career opportunity. This is not a complaint by any means. :-)
Consider that there's probably a large overlap between those groups
Did MIE/MTE on 2025 iPhones help to detect this longstanding zero day?
i wonder if this could be used to make a jailbreak possible :3
It's pretty unbelievable that a zero-day can sit here this long. If one can exist, the likelihood of more existing at all times is non-trivial.
Whether it's the walled garden of iOS or the relative openness of Android, I don't think either can police everything on anyone's behalf.
I'm not sure how organizations can secure any device, iOS or Android, if they can't track and control the network layer into and out of it, period, and there are zero carveouts for the OS itself around network traffic visibility.
> how organizations can secure any device, iOS or Android, if they can't track and control the network layer into and out of it, period, and there are zero carveouts for the OS itself around network traffic visibility.
The closest I've seen is an on-device VPN like Lockdown Privacy, but it can't block Apple bypassing the VPN.
https://lockdownprivacy.com/ | https://github.com/confirmedcode/Lockdown-iOS
Or the tiny CPU on the networking hardware chip
You cannot.
iOS is one problem, but it goes for every other device/server/desktop/appliance that you use.
You can take a lot of precautions, and mitigate some risk, and ensure that operations can continue even if something bad happens¹, but you cant ever "be safe".
¹ "" There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know "" (Often attributed to Donald Rumsfeld, though he did not originate the concept.)
Knowing what bad can happen is difficult.
How do you expect to catch this with network traffic analysis?
Previously: https://news.ycombinator.com/item?id=46979643
I wonder if Fil-C would have prevented this.
Doubtful, Apple is one of the largest advocates of safe C already.
I guess the fix is only for Tahoe?
Edit: I meant iOS 18
The zero-day mentioned in the article doesn't affect macOS.
But there were security updates for macOS 14 and macOS 15 released yesterday:
There's an update for Sequoia too.
But not for iOS 18, so this is a forced upgrade to the horrors of Liquid Glass.
Can’t wait to see how much battery it eats.
[flagged]
as in I now have to upgrade all my children's ancient iphones...?
I'd much rather not do that
You’d rather they not release updates to support them?
I'd rather they did so I don't have to upgrade
edit: my original post wasn't clear I see - I meant I don't want to ditch the phones they've got and hope Apple releases an update for ios 16
Ohhhh, I see what you mean now. I read that as you didn't want to upgrade the software, but you meant you didn't want to replace the hardware.
Yeah, that makes sense. I hope you get an update for them, too.
Why can't they just release security patches for older versions of iOS instead of forcing a version upgrade?
The exploit was always there, you just didn't know about it, but attackers might have. The only thing that changed is that you're now aware that there's a vulnerability.
And now everyone else is aware of it too... including anyone marginally above a scriptkiddie.
My suspicion is that these "exploits" are planted by spy agencies.
They don't appear there organically.
This kind of mental model only works if you think of things as made of huge shadowy blobs, not people.
dyld has one principal author, who would 100% quit and go to the press if he was told (by who?) to insert a back door. The whole org is composed of the same basic people as would be working on Linux or something. Are you imagining a mass of people in suits who learned how to do systems programming at the institute for evil?
Additionally, do you work in tech? You don’t think bugs appear organically? You don’t think creative exploitation of bugs is a thing?
dyld has several people working on it now AFAIK
I am not saying this one in particular.
Of course no one can admit it publicly.
But it is something that governments are known to proactively do.
You can get dirt on people a la Jeffrey Epstein. And use that to coerce them.
This vastly overstates both the competence of spy agencies and of software engineers in general. When it comes to memory unsafe code, the potential for exploits is nearly infinite.
> overstates both the competence of spy agencies
Stuxnet was pretty impressive: https://en.wikipedia.org/wiki/Stuxnet
It was also not a bug to be exploited.
It was a complicated product that many people worked to develop, and it took advantage of many pre-existing vulnerabilities, as well as knowledge of complex and niche systems, in order to work.
Yeah, Stuxnet was the absolute worst of the worst; the depths of its development we will likely never truly know. The cost of its development we will never truly know. It was an extremely advanced, hyper-targeted digital weapon. Nation states wouldn't even use this type of warfare against pedophiles.
Stuxnet was discovered because a bug was accidentally introduced during an update [0]. So I think it speaks more to how vulnerabilities and bugs do appear organically. If an insanely sophisticated program built under incredibly high security and secrecy standards can accidentally push an update introducing a bug, then why wouldn't it happen to Apple?
[0] https://repefs.wordpress.com/2025/04/09/a-comprehensive-anal...
Maybe sometimes? With how many bugs are normally found in very complex code, would a rational spy agency spend the money to add a few more? Doing so is its own type of black op, with plenty of ways to go wrong.
OTOH, how rational are spy agencies about such things?
Yes. Of course not all.
But some just happen to work too well.
But governments do have blatant back doors in chips & software.
Some suspect that Apple secretly backs some of these spyware services. I've heard rumors about graykey but only rumors. Thoughts?
>Some suspect ...
>I've heard rumors ...
So like, the comment you're replying to? This is just going in circles.
Open source wins... again.
Unfortunately it doesn’t actually
I am shocked to hear that over these years it was possible to extract data from a locked iPhone (hardening mode off).
I trusted apple.
>I trusted apple.
To what? Write 100% bug-free software? I don't think that's actually achievable, and expecting so is just setting yourself up for disappointment. Apple does a better job than most other vendors except maybe GrapheneOS. Mainstream Android vendors are far worse. Here's Cellebrite Premium's support matrix from July 2024, for locked devices. iPhones are vulnerable after first unlock (AFU), but Androids are even worse. They can be hacked even if they have been shut down/rebooted.
https://grapheneos.social/system/media_attachments/files/112...
https://grapheneos.social/system/media_attachments/files/112...
https://grapheneos.social/system/media_attachments/files/112...
These links working for anyone? 403 for me
Updated the links. The original were from discuss.grapheneos.org but it looks like they don't like hot-linking.
Qubes OS does a much better job though, because it relies on security through compartmentalization, not security through correctness.
The problem with that is it runs on a desktop, which means very little in the way of protection against physical attacks. You might be safe from Mossad trying to hack you from half way across the world, but you're not safe from someone doing an evil maid attack, or from seizing it and bruteforcing the FDE password (assuming you didn't set a 20 random character password).
If someone puts passwords shorter than 30 characters on their devices, then everything that happens to them is their own fault.
TPM with Heads protects my laptop from such attacks just fine. All based on FLOSS.
> assuming you didn't set a 20 random character password
It doesn't have to be all random characters for good protection.
This is a newly-discovered vulnerability (CVE-2026-20700, addressed along with CVE-2025-14174 and CVE-2025-43529).
Note that the description "an attacker with memory write capability may be able to execute arbitrary code" implies that this CVE is a step in a complex exploit chain. In other words, it's not a "grab a locked iPhone and bypass the passcode" vulnerability.
I may well be missing something, but this reads to me as code execution on user action, not lock bypass.
Like, you couldn’t get a locked phone that hadn’t already been compromised to do anything because it would be locked so you’d have no way to run the code that triggers the compromise.
Am I not interpreting things correctly?
[edit: ah, I guess “An attacker with memory write capability” might cover attackers with physical access to the device and external hardware attached to its circuit board that can write to the memory directly?]
No your original analysis is fine