"Google Chromium CSS contains a use-after-free vulnerability that could allow a remote attacker to potentially exploit heap corruption via a crafted HTML page. This vulnerability could affect multiple web browsers that utilize Chromium, including, but not limited to, Google Chrome, Microsoft Edge, and Opera."
That's pretty bad! I wonder what kind of bounty went to the researcher.
> That's pretty bad! I wonder what kind of bounty went to the researcher.
I'd be surprised if it's above $20K.
Bug bounty rewards are usually criminally low; doubly so when you consider the effort usually involved in not only finding serious vulns, but demonstrating a reliable way to exploit them.
Here is a comment that really helped me understand bug bounty payouts: https://news.ycombinator.com/item?id=43025038
Everyone should read this comment, it does a really eloquent job explaining the situation.
The fundamental thing to understand is this: The things you hear about that people make $500k for on the gray market and the things that you see people make $20k for in a bounty program are completely different deliverables, even if the root cause bug turns out to be the same.
Quoted gray market prices are generally for working exploit chains, which require increasingly complex and valuable mitigation bypasses which work in tandem with the initial access exploit; for example, for this exploit to be particularly useful, it needs a sandbox escape.
Developing a vulnerability into a full chain requires a huge amount of risk - not weird crimey bitcoin in a back alley risk like people in this thread seem to want to imagine, but simple time-value risk. While one party is spending hundreds of hours and burning several additional exploits in the course of making a reliable and difficult-to-detect chain out of this vulnerability, fifty people are changing their fuzzer settings and sending hundreds of bugs in for bounty payout. If they hit the same bug and win their $20k, the party gambling on the $200k full chain is back to square one.
Vulnerability research for bug bounty and full-chain exploit development are effectively different fields, with dramatically different research styles and economics. The fact that they intersect sometimes doesn't mean that it makes sense to compare pricing.
Why is it that the USA doesn't have its own bug bounty program for non-DOD systems? Like, sure, they have a bounty for vulns in govt systems. But why not accept vulns for any system, and offer to pay more than anyone else? It would give them a competitive advantage (offensive & defensive) over every other nation. End one experimental weapons program (or whatever garbage DOD spends its obscene budget on) and suddenly we're not cyber-sucky anymore.
I think you are confusing bug bounty programs with espionage and cyber warfare. The USA definitely accepts vulnerabilities for any system (or at least target systems), paying good money for them if it is an attack chain, giving them that competitive edge you mention. They have at least one military organization over this exact thing (USCYBERCOM) and realistically other orgs to include the intelligence community. There are no bug bounties on "any" system because bug bounties are part of programs to fix bugs, not exploit them. They therefore have bug bounties for their own systems, as those are the ones they would be interested in improving. What you described, which they definitely do, is cyber espionage, and those bugs are submitted through different channels than a bug bounty.
But that's the thing, I think they specifically need a non-IC program. If I'm a white-hat, grey-hat, or a somewhat cagey black-hat, I'm not gonna reach out to a shadowy organization with a penchant for extrajudicial surveillance, torture & killing to make $50k on a bug. Sure, you can try your hand at selling them an exploit that won't get revealed. But if only you and The Company know about the bug, and it could mean the upside in a potential war (or just a feather in an agency head's cap), why would The Company keep you alive and able to talk about it? OTOH, if the program you're reporting to doesn't have a track record of illegal activity, personally I'd feel a lot safer reporting there. And ideally their mission would be to patch the bug and not hold onto it. But we get to patch first, so it's still our advantage.
Because collecting and gatekeeping vulns so you can attack other countries is bad manners. If you look up some of the Snowden testimonies, it's implied the USA at least had access to some 0-days in the past, but nobody admitted to it, because it's just bad national politics.
Even if the USA is doing dog-shit in politics now, openly admitting to collecting cyber-weapons (instead of doing it silently) is just an open invitation to condemnation.
See the equation group saga https://en.wikipedia.org/wiki/Equation_Group
From being in the trenches a couple of decades ago, they do. They just don't disclose after they pay the bounty. They keep them to themselves. I knew one guy (~2010?) making good money just selling exploits (to a 3-letter agency) that disabled the tally lamps on webcams so the cams could be enabled without alerting the subject.
Just go with XMR
This underestimates the adaptability of threat actors. Massive cryptocurrency thefts from individuals have created a market for a rather wide range of server-side bugs.
Got a Gmail ATO? Just run it against some of the leaked cryptocurrency exchange databases, automatically scan for wallet backups and earn hundreds of millions within minutes.
People are paying tens of thousands for “bugs” that allow them to confirm if an email address is registered on a platform.
Even trust isn’t much of a problem anymore, well-known escrow services are everywhere.
The bounty could be very high. Last year one bug’s reporter was rewarded $250k. https://news.ycombinator.com/item?id=44861106
Maybe google is an exception (but then again, maybe that payout was part marketing to draw more researchers).
So is there anything that would actually satisfy the crowd here?
Offer $25K and it is "How dare a trillion dollar company pay so little?"
Offer $250K and it is "Hmm. Exception! Must be marketing!"
What precisely is an acceptable number?
One is a lament that the industry average is so low, and the other is… a lament that the industry average is so low. What's the problem?
An increase in the average bug payout. Bounty programs pay low on average.
A number better than what the exploit could be sold for on the black market
I don't believe those numbers will ever come close to converging, let alone bounty prices surpassing black market prices.
It seems like these vulnerabilities will always be more valuable to people who can guarantee that their use will generate a return than to people who will use them to prevent a theoretical loss.
Beyond that, selling zero-days is a seller's market where sellers can set prices and court many buyers, but bug bounties are a buyer's market where there is only one buyer and pricing is opaque and dictated by the buyer.
So why would anyone ever take a bounty instead of selling on the black market? Risk! You might get arrested or scammed selling an exploit on the black market, black market buyers know that, so they price it in to offers.
Even though I agree with the conclusion with respect to pricing, I don't think this comment is generally accurate.
Most* valuable exploits can be sold on the gray market - not via some bootleg forum with cryptocurrency scammers or in a shadowy back alley for a briefcase full of cash, but for a simple, taxed, legal consulting fee to a forensics or spyware vendor or a government agency in a vendor shaped trenchcoat, just like any other software consulting income.
The risk isn't arrest or scam, it's investment and time-value risk. Getting a bug bounty only requires (generally) that a bug can pass for real; get a crash dump with your magic value in a good looking place, submit, and you're done.
Selling an exploit chain on the gray market generally requires that the exploit chain be reliable, useful, and difficult to detect. This is orders of magnitude more difficult and is extremely high-risk work not because of some "shady" reason, but because there's a nonzero chance that the bug doesn't actually become useful or the vendor patches it before payout.
The things you see people make $500k for on the gray market and the things you see people make $20k for in a bounty program are completely different deliverables even if the root cause / CVE turns out to be the same.
*: For some definition of most, obviously there is an extant "true" crappy cryptocurrency forum black market for exploits but it's not very lucrative or high-skill compared to the "gray market;" these places are a dumping ground for exploits which are useful only for crime and/or for people who have difficulty doing even mildly legitimate business (widely sanctioned, off the grid due to personal history, etc etc.)
I see that someone linked an old tptacek comment about this topic which per the usual explains things more eloquently, so I'll link it again here too: https://news.ycombinator.com/item?id=43025038
> So why would anyone ever take a bounty instead of selling on the black market? Risk!
I like to believe there are also ethics involved in most cases
Systems that rely on ethical behaviour to function generally don't last long.
That is why I said "also", it should not be the only factor.
The conversation was moving between two possibilities only: either collect bug bounties or sell on the black market. I believe most (again: most, not all) security researchers collecting bug bounties right now would not start selling on the black market in case bounties disappeared. They would change their focus to something else to sustain themselves
The market is priced at the point that is most economic for the business. Apple buying an exploit for $100m is not worth it (to Apple) vs the potential loss of life of people who might be killed if it's sold on the black market. Buying an exploit for $1m prevents it being used to jailbreak, is good PR, and is ass-covering insurance in case an Apple exploit causes loss of life ('the seller could have sold to us, but instead they sold it to an evil corporation').
Not sure why you're getting downvoted. It's the unfortunate reality.
You can work your day job and make $20-500k/yr or pursue drug dealing and make $5-5000k/yr. I don’t think that’s actually a compelling argument for the latter even if the opportunity cost is better.
Drugs are illegal, exploits are not illegal. Selling them to someone associated with illegal activity is probably illegal, but there is a legitimate fully legal exploit market with buyers like intelligence agencies, and an illegal market with buyers that run oppressive regimes and commit genocide.
I think a big part of "criminally low" is that you'll make much more money selling it on the black market than getting the bounty.
I read this often, and I guess it could be true, but those kinds of transactions would presumably go through DNMs / forums like BF and the like. Which means crypto, and full anonymity. So either the buyer trusts the seller to deliver, or the seller trusts the buyer to pay. And once you reveal the particulars of a flaw, nothing prevents the buyer from running away (this actually also occurs regularly on legal, genuine bug bounty programs - they'll patch the problem discreetly after reading the report but never follow up, never mind pay; with little recourse for the researcher).
Even revealing enough details, but not everything, about the flaw to convince a potential buyer would be detrimental to the seller, as the level of details required to convince would likely massively simplify the work of the buyer should they decide to try and find the flaw themselves instead of buying. And I imagine much of those potential buyers would be state actors or organized criminal groups, both of which do have researchers in house.
The way this trust issue is (mostly) solved on drug DNMs is through the platform itself acting as an escrow agent; but I suspect such a thing would not work as well for selling vulnerabilities, because the volume is much lower, for one thing (preventing a high enough volume for reputation building); and the financial amounts are generally higher, for another.
The real money to be made as a criminal alternative, I think, would be to exploit the flaw yourself on real life targets. For example to drop ransomware payloads; these days ransomware groups even offer franchises - they'll take, say, 15% of the ransom cut and provide assistance with laundering/exploiting the target/etc; and claim your infection in the name of their group.
I don't think you know anything about how these industries work and should probably read some of the published books about them, like "This Is How They Tell Me The World Ends", instead of speculating in a way that will mislead people. Most purchasers of browser exploits are nation-state groups ("gray market") who are heavily incentivized not to screw the seller and would just wire some money directly, not black market sales.
I mean, you're still restricted to selling it to your own government, otherwise getting wired a cool $250k directly would raise a few red flags I think. And how many security researchers have a contact in some government-sponsored hacking company anyway? Do you really think that convincing them to buy a supposed zero-day exploit as a one-off would be easy?
Say you're in the US. I'm sure there are some CIA teams or whatever making use of Chromium exploits "off the record", but for any official business the government would just put pressure on Google directly to get what they want. So any project making use of your zero-day would be so secret that it'd be virtually impossible for you to even get in contact with anybody interested to buy it. Sure they might not try to "screw you", but it's sort of like going to the CIA and saying, "Hey would you be interested in buying this cache of illegal guns? Perhaps you could use it to arm Cuban rebels". What do you think they would respond to that?
Defence firms like Raytheon are often happy to pay for stuff like this. What happens afterwards with the exploit is anybody's guess. Source - a vague memory of a Darknet Diaries episode.
There are intermediate firms that will get the exploits passed to the right people. They are not very difficult to find.
Eh, not really? If it's a legit company who provides services to various governments, they're going to pay you, they're going to report the income to the government, you'll get a 1099 for contract/consulting, and you'll pay your taxes on the legit income. No red flags. Assuming they're legit and not currently sanctioned by the US government that is.
> Even revealing enough details, but not everything, about the flaw to convince a potential buyer would be detrimental to the seller, as the level of details required to convince would likely massively simplify the work of the buyer should they decide to try and find the flaw themselves instead of buying.
Is conning a seller really worth it for a potential buyer? Details will help an expert find the flaw, but it still takes lots of work, and there is the risk of not finding it (and the seller will be careful next time).
> And I imagine much of those potential buyers would be state actors or organized criminal groups, both of which do have researchers in house.
They also have the money to just buy an exploit.
> The real money to be made as a criminal alternative, I think, would be to exploit the flaw yourself on real life targets. For example to drop ransomware payloads; these days ransomware groups even offer franchises - they'll take, say, 15% of the ransom cut and provide assistance with laundering/exploiting the target/etc; and claim your infection in the name of their group.
I'd imagine the skills needed to get paid from ransomware victims without getting caught to be very different from the skills needed to find a vulnerability.
I am far from the halls of corporate decision making, but I really don't understand why bug bounties at trillion dollar companies are so low.
Because it's nice to get $10k legally + public credit than it is to get $100k while risking arrest + prison time, getting scammed, or selling your exploit to someone that uses it to ransom a children's hospital?
Is it in fact illegal to sell a zero day exploit of an open source application or library to whoever I want?
Depends. Within the US, there are data export laws that could make the "whoever" part illegal. There are also conspiracy to commit a crime laws that could imply liability. There are also laws that could make performing/demonstrating certain exploits illegal, even if divulging it isn't. That could result in some legal gray area. IANAL but have worked in this domain. Obviously different jurisdictions may handle such issues differently from one another.
Thanks, great answer. I was just thinking from a simple market value POV.
What about $500K selling it to governments?
Issue 1: Governments which your own gov't likes, or ones which it doesn't? The latter has downsides similar to a black market sale.
Issue 2: Selling to governments generally means selling to a Creepy-Spooky Agency. Sadly, creeps & spooks can "get ideas" about their $500k also buying them rights to your future work.
> but demonstrating a reliable way to exploit them
Is this a requirement for most bug bounty programs? Particularly the “reliable” bit?
This depends on the program.
So basically Firefox is not affected?
The listed browsers are basically skins on top of the same chromium base.
It's why Firefox and Safari are so important despite HN's wish they'd go away.
HN doesn't want firefox to go away. HN wants firefox to be better, more privacy/security focused, and to stop trying to copy chrome out of the misguided hope that being a poor imitation will somehow make it more popular.
Sadly, mozilla is now an adtech company (https://www.adexchanger.com/privacy/mozilla-acquires-anonym-...) and by default firefox now collects your data to sell to advertisers. We can expect less and less privacy for firefox users as Mozilla is now fully committed to trying to profit from the sale of firefox users personal data to advertisers.
As a 25 year Firefox user this is spot on. I held out for 5 years hoping they would figure something out, but all they did was release weird stuff like VPNs and half baked services with a layer of "privacy" nail polish.
Brave is an example of a company doing some of the same things, but actually succeeding it appears. They have some kind of VPN thing, but also have Tor tabs for some other use cases.
They have some kind of integration with crypto wallets I have used a few times, but I'm sure Firefox has a reason they can't do that or would mess it up.
You can only watch Mozilla make so many mistakes while you suffer a worse Internet experience. The sad part is that we are paying the price now. All of the companies that can benefit from the Chrome lock in are doing so. The web extensions are neutered - and more is coming - and the reasons are exactly what you would expect: more ads and weird user hostile features like "you must keep this window in the foreground" that attempt to extract a "premium" experience from basic usage.
Mozilla failed and now the best we have is Brave. Soon the fingerprinting will be good enough that Firefox will be akin to running a Tor browser, with a CAPTCHA verification for every page load.
What would be an acceptable revenue model? Google Chrome has the same privacy profile with the exception that Google retains the data for their own ad platforms.
Selling preferential search access is legally precarious due to FTC's lawsuit against Mozilla.
> What would be an acceptable revenue model?
They could start with the one they've refused for ages even though many have asked for it. Let people directly donate to fund the development of firefox (as opposed to just giving mozilla money to funnel into any number of their other projects). They could even make money selling merch if they didn't tank the brand. Firefox could have a very nice niche to fill as a privacy focused browser for power users who desire customization and security, but sadly they don't seem interested in being that. For whatever reason they'd rather spend a fortune buying adtech from facebook employees and be a chrome clone that pushes ads and sells user data, and that isn't going to inspire support from users.
That said, I'm not convinced that every open source project needs to be profit generating. Many projects are hugely successful without resorting to ads. What makes it possible for VLC or even Arch Linux to thrive without advertising that couldn't work just as well for firefox? The solution is certainly not to turn Firefox into a project that their users no longer want to support or use at all, but that seems to be where they are headed by selling out their userbase.
Well said. Do you know of any recent reports or if anyone has actually gone through the funding calculations regarding the funding model you described (let’s call it “FF-direct”) versus Mozilla’s status quo funding model?
Primary questions are: How much does FF cost to sustain? How much is spent on new performance, functionality, and feature development? What number does Firefox need to compete directly with Chrome? And if you asked an experienced FF project contributor, what would they say the delta is between the previous two questions?
- a 20+ year Firefox power user very familiar with the FF project, web browsers, and how they compete
I haven't seen those kinds of numbers, but I agree they'd be good to have. I know that firefox makes a massive amount of money from Google (last I heard they made something like 400 million a year) and firefox was bringing in 90% of Mozilla's total income, which means that the money firefox brings in isn't just going into firefox, but is holding up everything mozilla does. Even if a donation model was sufficient to support the browser, mozilla may not be happy about losing almost everything else they have going.
Looking around I find https://stateof.mozilla.org/ledger and https://assets.mozilla.net/annualreport/2024/mozilla-fdn-202... which might help answer some of those questions.
As for competing with chrome, I don't think they need to. Most people's only computer these days is an android phone and chrome is always going to be a first class citizen there. We saw the same thing with IE when windows was the operating system most people used.
It's perfectly fine for Chrome to be the default browser for the common people leaving firefox to be the preferred choice of the computer savvy. Firefox could slowly gain an audience as people start to become more aware of how chrome violates their privacy or as they seek relief from the worsening cesspool of ads chrome is encouraging the internet to become, but firefox never has to be number 1 or anywhere close to that in order to be successful and valued.
The biggest problem is a failure of trust. I won't donate to the Mozilla foundation because I have zero faith in them using that money wisely.
Wait the FTC is suing Mozilla?
HN wants Firefox but with better stewardship and fewer misdirected funds.
Mozilla - wrongly - believes that the majority of FF users care more about Mozilla's hobby projects than about their browser.
That's why - as far as I know - to this day it is impossible to directly fund Firefox. They'd rather take money from google than to be focusing on the one thing that matters.
We have no idea what is in that contract with Google. They get to be the default search engine, but what else? Does it prevent Firefox from accepting some sources of funding, like donations?
It would be great to get transparency on this…
Do you mean Firefox specifically? Because you can donate to Mozilla: https://www.mozillafoundation.org/en/donate/ it's that you can't specify where you want the funds to go.
yes, I do mean Firefox specifically. The Mozilla Foundation is not the Mozilla Corporation. The money you give to the foundation is for their charity work; none of it goes to the development of Firefox.
I am pretty sure that the issue is that they either admit to being stuck as a vassal beholden to Google, or they pretend to be enterprising and forward-looking with many promising projects.
I just want Firefox's search box to be on the top of the window so I don't have to bend my neck when I'm surfing in bed... I don't use it just for that.
If you're talking about url/search bar at the bottom on mobile, that's customisable - actually they ask you which you prefer when you install it, but you can change it at any time in settings. (personally I prefer all that stuff at the bottom since it's more conveniently where all my other phone nav is, and visibility fits in well with how I scroll)
I don't think that Mozilla believes that their pet projects are what the user community wants. I think they just don't care. Google's check will clear next year anyways.
That's probably true.
HN, and firefox users, can never decide where the money should go or what the goals should be. The problem with producing the better product is the amount of in-fighting increases exponentially. Google produces a "fuck you got mine" type browser and everyone knows it, so nobody really cares when they make god-awful privacy decisions or intentionally produce worse standards to try to fuck their customers up the ass in new and exciting ways.
When Firefox introduces a new feature, half the people complain it's stupid and worthless while the other half complain it's not enough. And, when it inevitably gets axed, it magically turns out it was actually beloved the whole time and oh no, my Grandma used Pocket as life support and now she can't breathe.
When Firefox implements new web standards half the people complain that they're bending to Google's whim and that these standards are stupid. We don't want them, just focus on performance and what people really care about! ... While the other half complains that it took so long, and in the meantime they switched to a real browser, like Chrome.
Of course, Safari is even further behind Firefox in standards and frankly it's not even close, but does anyone care? Of course not. Apple is another "fuck you got mine" type company. People love that.
And it doesn't just end at Firefox. Oh, no. Firefox OS? Depending on who you ask it's either the biggest missed opportunity ever or one of Mozilla's worst money-burning schemes. It's Schrödinger's software - in a parallel universe where it took off everyone would've always wanted it, and in the current universe nobody ever wanted it.
The biggest mistake Mozilla made was extending any kind of goodwill to their customer base. Clearly, that doesn't work and people do not like it. Let's all stop fucking around and be real for a second - nobody, and I do mean nobody, is switching to Google Chrome because Mozilla made some mistake. They're not, because the reality is that Firefox is truly irreplaceable and ahead of Chrome in so many aspects. They're switching to Chrome because they just don't care about being fucked up the ass, or worse, they secretly want to be.
> HN, and firefox users, can never decide where the money should go or what the goals should be.
Without ever having dealt with this problem, it sounds like an embarrassingly solved problem, in the sense of: He who gives the money, decides where it goes.
The other half is to provide features that are actually detrimental if you don't want them as plug-ins / extensions / whatever, rather than built in. Pocket is an example of this. Firefox OS is not, because it's not force-bundled with Firefox to begin with.
> They're switching to Chrome because they just don't care about being fucked up the ass, or worse, they secretly want to be.
The point where you stop trying to understand your users is the point where you start losing them.
Particularly weird impulse for technically inclined people…
Although I must admit to the guilty pleasure of gleefully using Chromium-only features in internal apps where users are guaranteed to run Edge.
Firefox is safe from this because their CSS handling was the first thing they rewrote in Rust.
Does the Rust implementation not use any unsafe, and not use libraries that use unsafe?
No. What would be the point of that?
Not Firefox, but Servo has quite a lot of unsafe, even though some of the results are false positives.
https://grep.app/search?f.repo=servo%2Fservo&f.repo.pattern=...
So Servo at the very least cannot be said to be 'safe'. And I believe the Rust code in Firefox is similar.
I mean, even if it was written in C or C++, it's unlikely two separate code bases would have the exact same use-after-free vuln.
It's unlikely, but it does actually happen. I've seen more than one complete rewrite of something important that had exactly the same bug. And I'm very sure that those sources were not related somehow.
Firefox and Safari are fine in this case, yeah.
No, though Firefox has its own CVE this week: https://thecyberexpress.com/firefox-v147-cve-2026-2447/
It's pretty hard to have an accidental use-after-free in the Firefox CSS engine because it is mostly safe Rust. It's possible, but very unlikely.
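To illustrate the class of bug (a minimal sketch, not code from Stylo): the pattern behind a use-after-free simply fails to compile in safe Rust, because the borrow checker rejects any reference that outlives the data it points to.

    // Hypothetical illustration; this does not compile in safe Rust.
    fn main() {
        let r;
        {
            let style = String::from("color: red");
            r = &style; // borrow `style`
        } // `style` is dropped (freed) here
        println!("{}", r); // the "use after free": rejected at compile time
    }

The equivalent dangling-pointer code in C++ compiles fine and only misbehaves at runtime.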
That came to my mind as well. CSS was one of the earliest major applications of Rust in Firefox. I believe that work was when the "Fearless Concurrency" slogan was popularized.
Yup. To this day, Firefox remains the only browser with a *parallel* CSS engine. Chromium and WebKit teams have considered this and decided not to pursue since it's really easy to get concurrency wrong.
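For flavor, Stylo's parallelism is built on the rayon crate; here is a toy sketch of what a data-parallel style pass looks like (hypothetical types and logic, nothing like Stylo's actual code), where the compiler rules out data races at compile time:

    use rayon::prelude::*;

    struct Node { classes: Vec<String> }
    struct Style { bold: bool }

    // Toy stand-in for per-node style resolution.
    fn style_one(n: &Node) -> Style {
        Style { bold: n.classes.iter().any(|c| c == "bold") }
    }

    // rayon distributes nodes across worker threads; the shared access
    // to `nodes` is safe because the type system forbids unsynchronized
    // mutation across threads.
    fn compute_styles(nodes: &[Node]) -> Vec<Style> {
        nodes.par_iter().map(style_one).collect()
    }

That compile-time guarantee is why a parallel CSS engine was considered tractable in Rust but too risky in C++.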
If I recall correctly, the CSS engine was originally developed for Servo and later embedded into Firefox.
Firefox and Safari developers dared the Chromium team to implement :has() and Houdini and this is the result!
/s
Yes, because nobody uses it
Presumably this affects all electron apps which embed chrome too? Don’t they pin the chrome version?
Yes, but it's only a vulnerability if the app allows rendering untrusted HTML or visiting untrusted websites, which most Electron apps don't.
Lots of apps like slack and discord will show you an opengraph preview of a website if you post a link. I could of course be wrong, but I expect you could craft an exploit that just required you to be able to post the link - then it would render the preview and trigger the problem.
Secondly, as a sibling pointed out, lots of apps have html ads, so if you show a malicious ad it could also trigger. I'm old enough to remember the early google ads, which google made text-only specifically because google said that ads were a possible vector for malware. Oh how the turns have tabled.
pretty sure I've had slack show me whole web pages without kicking me out to the mobile browser.
Except: Spotify (through ads), Microsoft Teams (through teams apps), Notion (through user embedded iframes), Obsidian (through user embedded iframes), VSCode (through extensions), etc...
It would also require a sandbox escape to be a meaningful vulnerability.
Unfortunately, "seen in the wild" likely means that they _also_ had a sandbox escape, which likely isn't revealed publicly because it's not a vulnerability in properly running execution (i.e., if the heap were not already corrupted, no vulnerability exists).
I'd bet that the sandbox escape is just in the underlying operating system kernel and therefore isn't a matter for Chromium to issue a CVE.
Yeah, but let's keep downplaying use-after-free as something not worth eliminating in 21st century systems languages.
https://materialize.com/blog/rust-concurrency-bug-unbounded-...
Edit: Replying to ghusbands:
'unsafe' is a core part of Rust itself, not a separate language. And it occurs often in some types of Rust projects or their dependencies. For instance, to avoid bounds checking and not rely on compiler optimizations, some Rust projects use vec::get_unchecked, which is unsafe. One occurrence in code is here:
https://grep.app/pola-rs/polars/main/crates/polars-io/src/cs...
And there are other reasons than performance to use unsafe, like FFI.
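A minimal sketch of the pattern (not from any particular project): the unchecked accessor trades the bounds check for a promise the compiler can no longer verify.

    // Sums the first n elements; caller must guarantee n <= v.len().
    fn sum_first(v: &[u64], n: usize) -> u64 {
        let mut total = 0;
        for i in 0..n {
            // Safe indexing (v[i]) would bounds-check each access and
            // panic on overrun. get_unchecked skips the check: if
            // n > v.len(), this is undefined behavior, exactly like an
            // out-of-bounds read in C.
            total += unsafe { *v.get_unchecked(i) };
        }
        total
    }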
Edit2: ghusbands had a different reply when I wrote the above reply, but edited it since.
Edit3: Ycombinator prevents posting relatively many new comments in a short time span. And ghusbands is also wrong about his answer not being edited without him making that clear.
Those kinds of arguments are like posting news about people still dying while wearing seat belts and helmets, ignoring the lives that were saved by having them on.
By the way, I have been having these kinds of arguments since Object Pascal, back when using languages safer than C was called straitjacket programming.
Ironically, most C wannabe replacements are Object Pascal/Modula-2 like in the safety they offer, except we know better 40 years later for the use cases they still had no answer for.
People made similar arguments regarding C++ versus Ada. The US military and defense industry even got something like a mandate in the 1990s to only write in Ada.
And then there was https://en.wikipedia.org/wiki/Ariane_flight_V88 , where US$370 million was lost. The code was written in Ada.
And using seat belts and wearing helmets do not help in those cases where 'unsafe' is used to take the seat belts and helmets off. And that is needed in Rust in a number of types of cases, such as some types of performance-sensitive code.
Yes, people like to point out Ariane explosion, without going into the details, and missing out on F-35 budget explosion much worse, with ridiculous failures like having to reboot its avionics in flight.
It is like bringing up the news of that lucky soul that only survived a car crash because it was thrown out of the car and managed to land in such a way that it survived, survival statistics be damned.
Wasn't the F-35 budget "explosion", or overruns, caused in general by mismanagement? But I will not argue that C++ is perfect. Instead, https://en.wikipedia.org/wiki/Ariane_flight_V88 , where US$370 million was lost, with code written in Ada, is an example where Ada was presented as a safer language and even mandated in the military industry, but where it turned out less well in practice. Even proclaimed "safer" languages can have catastrophic failures, and one can suspect that they might even be less safe in practice, especially if they need mandates to be picked. Instead of Ada companies or other organizations lobbying to force industry to use their language, maybe it is better if there is free competition, and then the onus is on the software development companies to deliver high quality. Ada has improved since the 1990s, perhaps because it has been forced to compete fairly with C, C++ and other languages. Following that thinking, increased, not decreased, competition should be encouraged.
Your lucky soul analogy argument doesn't make any sense.
Yes, once you use 'unsafe' to bypass the safety model, you don't get safety.
Edit: If you reply with a reply, rather than edits, you don't get such confusion.
I love rust but honestly I am more scared about supply chain attacks through cargo than memory corruption bugs. The reason being that supply chain attacks are probably way cheaper to pull off than finding these bugs
But this is irrelevant. If you're afraid of third-party code, you can just... choose not to use third-party code? Meanwhile, if I'm afraid of memory corruption in C, I cannot just choose not to have memory corruption; I must instead simply choose not to use C. Meanwhile, Chromium uses tons of third-party Rust code, and has thereby judged the risk differently.
Maybe it's more complicated than that? With allocate/delete discipline, C can be fairly safe memory-wise (I've written a million lines of code in C). But automated package managers etc. can bring in code under the covers, and you end up with something you didn't ask for. From that point of view, we reverse the conclusion.
>can be fairly safe memory-wise (written a million lines of code in C)
We are currently in a thread, where a major application has a heap corruption error in its CSS parser, and it's not even rare for such errors to occur. This doesn't seem true.
>But automated package managers etc can bring in code under the covers, and you end up with something you didn't ask for.
Last year there was a backdoor inserted into xz that was only caught because someone thought their CPU usage was a little too high. I don't think the whole "C is safer because people don't use dependencies" argument is actually sound.
yes, people often invoke "simply write safer c" but that doesn't make it any more realistic of a proposition in aggregate as we keep seeing.
Yet so many language features that 'help' with this issue, end up not helping. Null pointers are endemic in Java, as well as leaks. Heap fragmentation becomes difficult to address when the language hides it under layers of helpful abstraction.
In the end, discipline of some kind is needed. C is no different.
>With allocate/delete discipline, C can be fairly safe memory-wise (written a million lines of code in C)
The last 40-50 years have conclusively shown us that relying on the programmer to be disciplined, yourself included, does not work.
I'm sympathetic to the supply chain problem; I even wrote a whole thing on it: https://vincents.dev/blog/rust-dependencies-scare-me/
That being said, as many above have pointed out, you can choose not to bring in dependencies. The Chrome team already does this with the font parser library: they limit dependencies to 1 or 2 trusted ones with little to no transitive dependencies. Let's not pretend C / C++ is immune to this; we had the xz vuln not too long ago. C / C++ has the benefit of a culture of not using as many dependencies, but this is still a problem that exists. With the increase of code in the world due to AI, this is a problem we're going to need to fix sooner rather than later.
I don't think the supply chain should be a blocker for using rust, especially when one of the best C++ teams in the world, with good funding, struggles to always write perfect code. The chrome team has shown precedent for moving to rust safely and avoiding dependency hell; they'll just need to do it again.
They have hundreds of engineers, many of which are very gifted - hell, they can write their own dependencies!
Yeah, I am not saying don't use rust. But the average number of dependencies used by a dependency makes a big difference in my opinion. The reality is, most people will use vast amounts of dependencies - especially in vibe-coded environments, where LLMs try to save a few tokens.
The problem exists in C/C++ too, but the depth of dependencies is much smaller, making the attack surface smaller, and damage gets spread to fewer products.
If I personally had to choose between a product written in C without dependencies to run on openbsd versus the same product written in rust with a few dependencies I would probably choose the C implementation. Even if there is a memory bug, if the underlying system is right they are extremely difficult/expensive to exploit. Abusing a supply chain on the other hand is very easy
But the thing is, these DO get exploited in the wild; we see that again and again in high value targets like operating systems. That's why apple and google go to such extremes to work in things like bounds checking. ROP/JOP chains have gotten good, and LLMs are even able to help these days (if you have the bankroll).
It's a culture problem and I still have hope we can change that. My big hope is that as more big players get into it - windows, linux, android, chrome - we'll get high quality standalone packages. Many of these products have to reach certain standards. We saw this recently with JPEG XL. It got accepted into chromium and they've been diligent about not bringing in additional external dependencies.
Projects like sudo-rs take the same approach. As always, good engineers will make good code, and as more of a niche for rust gets carved out I believe we'll see an ecosystem more like c / cpp and less like nodejs (of course this is just my speculation).
> But the thing is these DO get exploited in the wild we see that again and again in high value targets like operating systems.
Yes but so do supply chain attacks. I mean we both know there's never a way to be absolutely secure and it's all just about probability. The question is how to determine what product may have better chances. All I am saying is that I personally prioritize fewer dependencies over memory safety.
I like your optimism, which I unfortunately struggle to share. I believe the quality of code will go down, there will be a lot of vibe code, and in general inexperienced people who don't put in the cognitive effort to pay attention to it. As software gets cheaper with AI, it will also become increasingly difficult to find the good things in a sea of slop. A good time for all the security engineers though ;)
right but these differ drastically, one is writing perfect code which is quite difficult the other is opting not to take a dependency. One is much more realistic.
I agree on software quality going down, I'm looking very closely at foundational software being written in rust (mostly in the kernel) and it seems to be okay for now.
The other hope is that maybe one day rust will get a fatter standard lib. I understand the opposition to this but I really want a series of crates tied strongly to the ecosystem and funded and audited by the foundation. I think this is the way they were going with offering the foundation maintainer fund.
Personally I'm thinking about moving my career into embedded to escape the massive dependencies and learn more about how computers really work without all the slop on top.
If you can bring in 3rd party libraries, you can be hit with a supply chain attack. C and C++ aren't immune; it's just harder to pull off due to dependency management being more complex (meaning you'll naturally work with fewer dependencies).
It's not more complex in C or C++, you just have less of a culture of buying into a whole eco-system. C and C++ play nice with the build system that you bring, rather than that you are forced into a particular way of working.
It's 'just a compiler' (ok, a bit more than that). I don't need to use a particular IDE, a particular build system, a particular package manager or even a particular repository.
That is not to throw shade on those other languages, each to their own, but I just like my tools to stay in their lane.
Just like I have a drawer full of different hammers rather than one hammer with 12 different heads, a screwdriver, a hardware store and a drill attachment. I wouldn't know what to do with it.
You'll find more quality libraries in C because people don't split them down into microscopic parcels. Even something like 'just' has tens of deps, including one to check that something is executable.
https://github.com/casey/just/blob/master/Cargo.toml
That’s just asking for trouble down the line.
You also won't typically find C/C++ developers blindly yolo'ing the latest version of a dependency from the Internet into their CI/CD pipeline.
They’ll stick with a stable version that has the features they need until they have a good reason to move. That version will be one they’ve decided to ship themselves, or it’ll be provided by someone like Debian or Red Hat.
Unless of course they are using vcpkg, conan or FetchContent.
Most corporations are already using the likes of Nexus or JFrog Artifactory, regardless of the programming language.
yes, the average number of dependencies used per dependency appears to be much larger in rust, and that's what I meant and what is worrying me. In theory C can be written in a memory safe manner, and in theory rust can be used without large chunks of supply chain vulnerabilities. Both of these are not the case in practice though.
> both of these are not the case in practice though
No, people routinely write Rust with no third-party dependencies, and yet people do not routinely write C code that is memory-safe. Your threat model needs re-evaluating. Also keep in mind that the most common dependencies (rand, serde, regex, etc) are maintained by long-established teams at the core of the Rust ecosystem (regex under the rust-lang organization itself), and are not much more susceptible to supply chain attacks than the compiler.
People also write Rust code that is not memory-safe.
https://materialize.com/blog/rust-concurrency-bug-unbounded-...
The vast majority of Rust code out there doesn't use the `unsafe` keyword at all, and the vastly smaller amount of unsafe code that exists allows for focused and precise testing and verification. You really have no idea what you're talking about if you're trying to say that Rust is anywhere in the ballpark of C or C++ here.
But not "routinely".
How can you be sure? When I looked at for instance sudo-rs, it proclaimed loudly that it is memory safe, but its code has lots of unsafe.
https://github.com/trifectatechfoundation/sudo-rs
https://grep.app/search?f.repo=trifectatechfoundation%2Fsudo...
And Miri is very popular in Rust. Even if a Rust project doesn't have unsafe, sometimes people still run Miri with it, since dependencies might have messed up their unsafe usage.
I know it's a sensitive topic for a lot of people, but as I said, I love rust. I don't know a lot of rust projects though that don't use any dependencies. In my humble opinion, disregarding the risks of such supply chain attacks is at least as bad as people disregarding the risk of memory unsafe code. But keep in mind, I'm not saying don't use rust.
mamma mia! one day anyhow and anyerror will be backdoored it's inevitable
One difference is that it's an incredibly hard problem to check whether your C code is memory safe since every single line of your code is a risk. On the other hand, it's easy to at least assess where your supply vulnerabilities lie (read Cargo.toml), and you can enforce your policy of choice (e.g. whitelist a few specific dependencies only, vendor them, etc).
I would argue that almost all major rust projects use dependencies. Checking the dependencies for vulnerabilities might be just as difficult as checking C code for memory safety, maybe even worse, because dependencies have dependencies and the amount of code to be checked can easily sky rocket. The problem gets even worse if you consider that not all rust code is safe, and that C libraries can be included and so on
Yes, but I believe that results in a cost/benefit analysis. If there are readily available rust crates that do something you need, and the cost of a possible vulnerability is not huge, most projects might decide (right or wrong) that it is worth it. It's an interesting question why projects tend to make different decisions in different languages, but it does not necessarily mean that you have to make the same decisions.
My point is that if you put a very high emphasis on avoiding vulnerabilities, you can either write the code in C with no/limited dependencies (and still risk memory safety bugs), or write the code in Rust with no/limited dependencies and no/limited unsafe code, and get much stronger guarantees for the same/less effort.
Fair, you see the perspective from someone writing the software and it makes sense. But when I see it though the lenses of someone choosing software to run, I would rather choose a C program with potential memory bugs than a rust program with a lot of dependencies - because I am more scared about supply chain attacks than someone being able to exploit a memory bug. But then again, this obviously changes if the rust program has no dependencies.
The statistics we have on real world security exploits prove that most security exploits are not coming from supply chain attacks, though.
Memory safety related security exploits happen in a steady stream in basically all non-trivial C projects, but supply chain attacks, while possible, are much more rare.
I'm not saying we shouldn't care about both issues, but the idea is to fix the low hanging fruit and common cases before optimizing for things that aren't in practice that big of a deal.
Also, C is not inherently invulnerable to supply chain attacks either!
Google already uses `cargo-vet` for rust dependencies.
that's good, but it won't eliminate the risk
Nothing eliminates the risk but it is basically a best-in-class solution. If your primary concern is supply chain risk, there you go, best in class defense against it.
If anything, what are you doing about supply chain for the existing code base? How is cargo worse here when cargo-vet exists and is actively maintained by Google, Mozilla, and others?
true, but rust's success in creating an easy to use dependency manager is also its curse. In general rust software seems to use a larger number of dependencies than c/c++ because of that, where each is at risk of becoming an attack vector. my prediction is that we will see some abuse of this in the future, similar to what npm experienced
All mainstream package managers are built with zero forethought into security, as far as I can tell. I don't think any of them are any good at it at all, otherwise they wouldn't give arbitrary code execution with literally zero restrictions, no ability to audit, etc.
That said, `cargo-vet` is easily the best tool for mitigating this that I am aware of and it exists for Rust and is actively maintained by Google, Mozilla, and many others. I think it's fine to say "Rust encourages using more dependencies" but it has to be acknowledged that Rust also brings with it the best in class tool for supply chain security.
Could it be better? Absolutely. God yes. Why is cargo giving access to `~/.ssh/` for every `build.rs`? Why do package managers not make any effort to sandbox? But that's life today.
"Actually, you forgot Brave."
I quoted directly from NIST, there's many other browsers and non-browsers that use chromium
Steam and VSCode pop into my mind.
It was intended as a joke reference to the 2004 Kerry / Bush debate. It's not a coincidence that Google would leave off an ad-blocking variant of Chrome.
they listed the top 3 most popular chromium browsers, covering 90%+ of chromium users
But not 90% of users here.
did you also take poland being omitted to be some sort of conspiracy? seems you missed the point of why that "Actually, you forgot..." moment became such a punchline. Like it or not Brave is a very niche browser with rather insignificant market share why you would expect them to be mentioned in the first place is entirely lost on me. there are dozens of chromium forks also with under 1% market share, should we be forced to mention them all?
It seemed to me like an obvious telegraph of bias.
I understand the meme very well. What made the Poland meme was that Poland's membership in the coalition was irrelevant to the "grand coalition" narrative--Kerry's omission of Poland is therefore in the same vein as Google's.
If you understand the meme "very well" then what do you mean by "telegraph of bias"? The joke is that Poland was largely irrelevant compared to the United States in that context, making Bush's (and your own) comeback laughable. It's not a conspiracy or "bias" that you don't mention Poland or the other members of the coalition for the same reason you don't mention every single Chromium fork, because realistically its not relevant.
And just to get ahead of it, I sure hope you are not tempted to make an equivalency between a Polish death and not mentioning Brave in a vain effort to resuscitate your position. Because not only would that be extremely misplaced given you provided the clumsy reference in the first place, but Kerry's point in and of itself doesn't negate that. You can both understand that any life lost is a tragedy while also understanding there is no "grand coalition" when the United States shoulders > 90% of the costs. Just like (even though again, these things should not be compared, but just to indulge the comparison you yourself invoked) maybe Brave or some other under-1% fork does some good things, but that doesn't mean it is relevant to list them for this kind of announcement or any time chromium comes up.
Honestly I have no idea what you're trying to say. Following the allusion to the meme you brought up would be to realize that saying "Actually, you forgot about Brave" is a funny thing to say because its irrelevant and thus a dumb thing to say. It seems you understand there is a joke being made here but perhaps don't realize you're on the wrong side of it.
I wonder how many bugs like this are lurking in the various dark corners of the Chromium/Blink codebase that nobody has taken a good, hard look at in a long time.
Given the staggering importance of the projects they should really have a full-time, well-staffed, well-funded, dedicated team combing through every line, hunting these things down, and fixing them before they have a chance to be used. It'd be a better use of resources than smart fridge integration or whatever other bells and whistles Google has most recently decided to tack onto Chrome.
Chromium is pretty aggressively fuzzed. There aren't a lot of dark corners that can't be reached via a sufficiently aggressive fuzzer.
Not sure about that one. Fuzzers have a hard time creating certain narrow preconditions that a manual review can find.
Google, to their credit, has invested a TON of money into both manual review and also fuzzers. Every major fuzzing project I've read about in the last few years has been at least funded in part by Google.
They’ve gotten way better at this over the last decade with coverage guided execution.
Well, yes and no. For example, coverage-guided fuzzers won't reliably find the taken branch in
if (hash(x) == 0x12345678) {
}
Of course this is contrived, but you can imagine something similar where it requires a delicate setup for that branch to be taken at all, that a human (or these days, an LLM) can find straightforwardly.
Is Google using LLM-guided fuzzers that can inspect the code first?
Short answer: no. Long answer: there is a lot of research about fuzzing, and there is a lot of incremental progress. We are not even at half here...
That's true, but isn't Chromium one of the largest and most complicated code bases in history? If you removed the drivers from Linux, which probably 99.9% aren't used in any specific hardware, then Chromium is far more LOC than the Linux kernel core even.
Chromium has so many LOC and is so large that it bypasses github limits.
What I mean by this is that github has a limit (in my understanding) on the sizes of public repos.
Chromium bypasses that, and this is the reason why, if you fork Chromium, you can get unlimited storage on github, iirc.
Github starts to get really wonky when you do this tho (iirc).
I could be wrong, I usually am, but wanted to share this fact just to illustrate the absurd scale of how large chromium is.
"Use after free in CSS" is a funny description to see.
I think they meant something like the CSS parser, or the CSS Object Model (CSSOM).
One of the other commenters wrote a post that said it was related to @font-feature-values
Why?
To me at least it reads funny because when I think of CSS I think of the language itself and not the accompanying tools that are then running the CSS.
Saying "Markdown has a CVE" would sound equally off. I'm aware that its not actually CSS having the vulnerability but when simplified that's what it sounds like.
Funny you'd mention that, when Notepad had a CVE in its markdown parsing recently.
I don't quite understand the vulnerability: when exploited, you can get information about the page from which the exploit code is running. Without a sandbox escape or XSS, that seems almost completely harmless?
This is the "impact" section on https://github.com/huseyinstif/CVE-2026-2441-PoC:
Arbitrary code execution within the renderer process sandbox
Information disclosure — leak V8 heap pointers (ASLR bypass), read renderer memory contents
Credential theft — read document.cookie, localStorage, sessionStorage, form input values
Session hijacking — steal session tokens, exfiltrate via fetch() / WebSocket / sendBeacon()
DOM manipulation — inject phishing forms, modify page content
Keylogging — capture all keystrokes via addEventListener('keydown')
Browser exploits are almost always two steps: you exploit a renderer bug in order to get arbitrary code execution inside a sandboxed process, and then you use a second sandbox escape exploit in order to gain arbitrary code execution in the non-sandboxed broker process. The first line of that (almost definitely AI generated) summary is the bad part, and means that this is one half of a full browser compromise chain. The fact that you still need a sandbox escape doesn't mean that it is harmless, especially since if it's being exploited in the wild that means whoever is using it probably does also have a sandbox escape they are pairing with it.
You're spot on about the two-step chain. The scary part of these "in the wild" exploits is that the attackers usually do have that second stage (the sandbox escape) ready to go.
This is partly why we built BrowserBox [0] (a remote browser isolation (RBI) solution). The philosophy is that you assume the renderer will get owned (step 1) and the sandbox will be escaped (step 2). By running that whole process in a disposable Docker container on a remote server, the "sandbox escape" just lands the attacker in an empty, ephemeral container rather than on the user's local OS.
It essentially turns a critical RCE + Sandbox Escape chain into a contained server-side resource exhaustion issue, protecting the actual endpoint data and credentials.
Thanks for the explanation. So much for AI making it easier to learn things!
The fact that these still show up is pretty wild to me. Don't we have a bunch of tools that should create memory-safish binaries by applying the same validation checks that memory-safe languages get for free purely from their design?
I get that CSS has changed a lot over the years, with variables, scopes, and things adopted from Less/Sass/Coffee, but people use NoScript precisely because JavaScript is risky. What if CSS can be just as risky... time to also have NoStyle?
Honestly, pretty excited for the full report since it's either stupid as hell or a multi-step attack chain.
> Don't we have a bunch of tools that should create memory-safish binaries by applying the same validation checks that memory-safe languages get for free purely from their design?
No, we don't. All of the ones we have are heavily leveraged in Chromium or were outright developed at Google for similar projects. 10s of billions are spent to try to get Chromium to not have these vulnerabilities, using those tools. And here we are.
I'll elaborate a bit. Things like sanitizers largely rely on test coverage. Google spends a lot of money on things like fuzzing, but coverage is still a critical requirement. For a massive codebase, getting proper coverage is obviously really tricky. We'll have to learn more about this vulnerability, but you can see how even that limitation alone is sufficient to explain gaps.
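As a toy illustration of what "relies on coverage" means (hypothetical Rust code, nothing to do with Chromium): dynamic tools only report bugs on paths that actually execute under instrumentation, so an uncovered branch hides its bug from the sanitizer entirely.

    // Illustrative code, not from any real project. The out-of-bounds
    // read in `broken` exists regardless, but a sanitizer or Miri only
    // reports it if some test or fuzz input actually executes that line.
    fn broken(flag: bool) -> i32 {
        let arr = [1, 2, 3];
        let idx = if flag { 10 } else { 0 }; // 10 is out of bounds
        let p = arr.as_ptr();
        unsafe { *p.add(idx) } // UB when flag is true, silent otherwise
    }

    fn main() {
        // Only the safe path is covered, so an instrumented run
        // reports zero findings, despite the latent bug.
        println!("{}", broken(false));
        // broken(true) would be flagged immediately, but nothing calls it.
    }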
> No, we don't. All of the ones we have are heavily leveraged in Chromium or were outright developed at Google for similar projects. 10s of billions are spent to try to get Chromium to not have these vulnerabilities, using those tools. And here we are.
Chromium is filled with sloppy and old code. Some of the source code (at least if dependencies are included) is more than 20 years old, and a lot of focus has been on performance, not security.
Using Rust does not necessarily solve this. First, performance-sensitive code can require 'unsafe', and unsafe allows for memory unsafety, thus going back to square one, or further back. And second, memory safety isn't the only source of vulnerabilities. Rust's tagged unions and pattern matching help a lot with general program correctness, however, and C++ is lagging behind there.
> Chromium is filled with sloppy and old code. Some of the source code (at least if dependencies are included) is more than 20 years old, and a lot of focus has been on performance, not security.
Chromium is also some of the most heavily invested-in software when it comes to security. Entire technologies that we now take for granted (seccomp-BPF comes to mind) exist to make Chrome safe. Sanitizers were a Google project that Chromium was an aggressive adopter of and contributor to. I could go on.
> Using Rust does not necessarily solve this. First, performance-sensitive code can require 'unsafe', and unsafe allows for memory unsafety, thus going back to square one, or further back.
This isn't really true? I have no idea what "further back" means here. The answer seems to just be "no". Unsafe does allow for memory unsafety, but it's hilarious to me when people bring this up, tbh. You can literally `grep unsafe` and then ensure that your code in that area is safe using all sorts of otherwise insanely expensive means: fuzz that code, ensure coverage of that code, run `miri` (which is like a sanitizer on steroids), or literally formally verify it. It's ridiculous to compare this to C++, where you have no "grep for the place to start" capability. You go from having to reason about tens of millions of lines of code, with a state space vastly larger than the number of particles in the universe, down to a tiny block.
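To make the `grep unsafe` point concrete, here's a toy sketch (illustrative names, not from any real codebase) of what that auditable surface looks like. The single unsafe operation is syntactically fenced off, with its contract written down, so review, fuzzing, and `cargo miri test` can concentrate on it:

    // Toy example; all names are made up for illustration.

    /// Returns the first byte without a bounds check.
    ///
    /// # Safety
    /// Callers must guarantee that `v` is non-empty.
    unsafe fn first_unchecked(v: &[u8]) -> u8 {
        // SAFETY: the caller promises v.len() >= 1, so index 0 is in bounds.
        *v.get_unchecked(0)
    }

    fn first_or_zero(v: &[u8]) -> u8 {
        if v.is_empty() {
            0
        } else {
            // SAFETY: we just checked that v is non-empty.
            unsafe { first_unchecked(v) }
        }
    }

    fn main() {
        assert_eq!(first_or_zero(b"abc"), b'a');
        assert_eq!(first_or_zero(&[]), 0);
    }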
With the level of investment that Google puts into things like fuzzing, Rust would have absolutely made this bug harder to ship.
> And second, memory safety isn't the only source of vulnerabilities.
It's the source of this one, and of every in-the-wild (ITW) Chromium exploit I can recall off the top of my head.
Sorry, but you are getting the basics of Rust wrong, and that undermines your whole reply. Please read and understand https://doc.rust-lang.org/nomicon/working-with-unsafe.html before engaging further in topics regarding Rust and unsafe. I also encourage you to spend more time thinking about your arguments, and studying basic logic, since you make several basic errors in reasoning.
Please be more specific about where you think the parent commenter is getting the basics wrong. A single quoted sentence will do.
Sorry, but if he or you had read and understood the link I gave (it is not a long read, and it is interesting in my opinion), it would be quite clear. If he boldly makes false statements about the basics, then he cannot be said to be engaging in good-faith debate, especially considering the errors he makes regarding basic logic.
However, I can at least give you a hint: if a change is made to a part of the code that is not in an unsafe block, is it sufficient to inspect only the unsafe block itself in order to rule out memory unsafety? If he had done his basic homework, he would know the answer to that and its consequences.
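To spell the hint out with a toy sketch (made-up names; an illustration, not code from any real project): the answer is no, because an unsafe block is only sound relative to invariants that safe code elsewhere must uphold. Here, a purely safe edit breaks the invariant the unsafe block relies on, and Miri would flag the resulting out-of-bounds read:

    struct TinyVec {
        buf: Box<[u8]>,
        len: usize, // invariant: len <= buf.len()
    }

    impl TinyVec {
        fn new(cap: usize) -> Self {
            TinyVec { buf: vec![0u8; cap].into_boxed_slice(), len: 0 }
        }

        fn get(&self, i: usize) -> Option<u8> {
            if i < self.len {
                // SAFETY: relies on len <= buf.len(); only safe code
                // elsewhere in this impl can break that assumption.
                Some(unsafe { *self.buf.get_unchecked(i) })
            } else {
                None
            }
        }

        // A later, entirely safe change made far from the unsafe block:
        fn set_len_by_accident(&mut self, n: usize) {
            self.len = n; // oops: no check against buf.len()
        }
    }

    fn main() {
        let mut v = TinyVec::new(4);
        v.set_len_by_accident(1_000); // safe call, broken invariant
        let _ = v.get(500); // out-of-bounds read through the unsafe block
    }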
You can also consider this: why do some Rust projects that do not contain a single line of unsafe still run Miri, a tool designed to catch some (not all) undefined behavior and memory unsafety, and which can run 50x-200x slower than regular Rust? An example can be seen in https://zackoverflow.dev/writing/unsafe-rust-vs-zig/, where the author used Miri to find memory unsafety in a dependency, upstreamed a fix to that dependency, and then encountered memory unsafety in a different dependency and gave up.
> If you use a crate in your Rust program, Miri will also panic if that crate has some UB. This sucks because there’s no way to configure it to skip over the crate, so you either have to fork and patch the UB yourself, or raise an issue with the authors of the crates and hopefully they fix it.
> This happened to me once on another project and I waited a day for it to get fixed, then when it was finally fixed I immediately ran into another source of UB from another crate and gave up.
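For anyone unfamiliar with Miri, here is a minimal sketch (mine, not from the linked post) of the kind of UB it catches even when a normal run seems fine. Fittingly, it is a use-after-free, the same bug class as this CVE: `cargo run` may happily print 42, while `cargo miri run` reports the dangling-pointer access and aborts.

    fn main() {
        let p: *const i32;
        {
            let x = Box::new(42);
            p = &*x as *const i32;
        } // x is dropped here, so p now dangles
        let v = unsafe { *p }; // UB: read through a dangling pointer
        println!("{v}");
    }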
I'll need you to be much more specific. I'm actually quite familiar with Rust: I've worked with it since 2015, spoke at the first RustConf, have written it professionally, and worked on a team that did vulnerability research on a highly hardened Rust codebase[0] in which `unsafe` usage introduced a vulnerability.
If you'd like me to construct a formal syllogism to communicate my points, I might be able to oblige. You first, though, because I find it utterly ridiculous to complain about "logical errors" in what I've written when your post made numerous unsupported or seemingly irrelevant claims.
[0] https://web.archive.org/web/20221001182026/https://www.grapl...
I have serious doubts about the claims in your latest post. If even half of them are true, then I find it severely disappointing, and difficult to comprehend, that you somehow have such a poor understanding of unsafe in Rust yet make such bold claims about it, on top of the other faults in your previous comment. Your description of my post is also entirely wrong.
So, this is unhelpful and makes me think very little of you. Your other post is better, so I'll reply to that. I will note that I almost certainly know more than you, so any sort of "you don't know what you're talking about" will fall on deaf ears.
Given my experience, you should question your own position, as, empirically, I have demonstrated expertise on this topic.
The fact that you won't make points but instead simply ask questions demonstrates, I think, how weakly you're able to express your own position. You're the one advocating for reasoned discussion; perhaps you should abandon this weak Socratic approach and present your arguments?
> If a change is made to a part of the code that is not in an unsafe block, is it sufficient to inspect only the unsafe block itself in order to rule out memory unsafety? If he had done his basic homework, he would know the answer to that and its consequences.
No, and I didn't say this. Memory safety is not a strictly local property, and I never claimed it was; that's merely an uncharitable and naive reading of my post. I said that `grep unsafe` gives you a starting place, because any memory safety violation (barring unsound compiler issues, etc.) almost necessarily involves an unsafe block.
> Why do some Rust projects that do not contain a single line of unsafe still run Miri, a tool designed to catch some (not all) undefined behavior and memory unsafety, and which can run 50x-200x slower than regular Rust? An example can be seen in https://zackoverflow.dev/writing/unsafe-rust-vs-zig/, where the author used Miri to find memory unsafety in a dependency, upstreamed a fix to that dependency, and then encountered memory unsafety in a different dependency and gave up.
This is just changing the definition of "do not contain a single line of unsafe" to mean "excluding dependencies". Who ever said that you shouldn't check for unsafe in your dependencies? Not me!
Anyway, you seem quite dumb. Your inability to properly interpret a position, both as you may first see it and in its most charitable reading, and then to engage with that, is a personal failing that you should reflect on. Notably, I didn't even mention Rust in my first post; you are the one who made the initial claims about Rust. I cannot express how deeply ironic it is for you to then make ridiculous claims about "logical" arguments; it's funny to me.
I strongly doubt I have literally anything to learn from discussing this with you but perhaps this post will be helpful for other readers. Certainly I have nothing to learn from you on the topics of Rust and formal reasoning, two areas where I have far more experience than you, I suspect.
> Things like sanitizers largely rely on test coverage.
And not in a trivial "this line is traversed" way; you need to actually trigger the error condition at runtime for a sanitizer to see anything. Which is why I always shake my head at claims that Go has "amazing thread safety" because it has the race detector (a.k.a. TSan). That's the opposite of thread safety. It is, if anything, an admission of a lack of it.
I heard they once created an entire language that would replace C++ in all their projects. Obviously they never rewrote Chrome in Go.
> 10s of billions are spent to try to get Chromium to not have these vulnerabilities, using those tools. And here we are.
Shouldn't pages run in isolated and sandboxed processes anyway? If that exploit gets you anywhere, it would be a failure of multiple layers.
The ITW exploit has some sort of sandbox escape. My money is on a kernel exploit, but there are other options: universal XSS, IPC, etc. A kernel vuln is the most likely by far, IMO.
Chromium uses probably the single most advanced sandbox out there, at least for software that users are likely to run into.
They do run in a sandbox, and this exploit gives the attacker RCE inside the sandbox. It is not in and of itself a sandbox escape.
However, if you have arbitrary code execution, you can groom the heap with malloc/new to set up the layout for a heap-overflow-to-ret2libc chain or something similar.
I don't think Go was ever planned to completely overtake C++. It is still a garbage collected language at the end of the day.
I think the parent is referring to Carbon: https://en.wikipedia.org/wiki/Carbon_(programming_language)
I actually wasn't aware of that language. It was more a reference to the overblown claims Pike made in the early days of Go, where he presented it as the C++ replacement for everything at Google.
Yeah, but that was never Google's position; rather, it was Rob Pike and his peers, who never liked C++.
Note that even though C++ was born at Bell Labs as a UNIX language and a sibling of C, Plan 9 and Inferno never supported it.
Here is the blog post from Rob Pike, https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
Many people enjoy playing games, and watching video productions, made with garbage-collected C++ engines.
Go's main issue is its language design approach.
"No-style" would break the modern web even more than No-Script does, unfortunately. CSS is just too integral to layout and functionality now.
A more robust alternative to disabling features is isolating where they execute. Instead of stripping out CSS/JS and breaking sites, you stream the browser session from a remote server.
If a zero-day like CVE-2026-2441 hits the parser, it crashes (or exploits) the remote instance, not your local machine. You get to keep the rich web experience (CSS, JS, fonts) without trusting your local CPU with the parsing logic. It's basically "air-gapping" your browser tab. Not perfect, since attackers could still add a third step to compromise the remote host and pivot back to your local machine, but isolation like this adds defense in depth: the attacker now has to get through a much narrower attack surface (the local-remote tunnel) than if you ran everything locally.
"Many of our security bugs are detected using AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, Control Flow Integrity, libFuzzer, or AFL."
Interesting they are listing archived projects and not OSS-Fuzz. What's the reason for this?
I thought OSS-Fuzz still uses the aforementioned sanitizers and fuzzing engines; it is not by itself a fuzzing engine.
I'd love to see what the PoC code looks like, of course after the patch has been rolled out for a few weeks.
Here's one: https://github.com/huseyinstif/CVE-2026-2441-PoC
Maybe Chromium should also rewrite their rendering engine in Rust ;p
You joke, but a substantial portion of the Blink engine was (re)written in garbage-collected C++, to similar effect.
They could just invest in Servo and use that.
When I try to look up the CVE/issue I get:
https://issues.chromium.org/issues/483569511 - [TBD][483569511] High CVE-2026-2441: Use after free in CSS. Reported by Shaheen Fazim on 2026-02-11
> Access is denied to this issue. Access to this issue may be resolved by signing in.
This is insane! What other zero-days are out there and being used?
Also, this seems Chromium-only, so it doesn't impact Firefox?
Yeah, Firefox uses a different CSS engine that doesn't automatically have this same use-after-free.
DevTools is seemingly partially broken in this version: if I have DevTools open on a reasonably dynamic web app, Chrome will crash within a minute or two.
It's also been ridiculously slow for a month or two now :/ Not a good time to be working on some relatively intricate performance optimisation, with DevTools taking 1-4 seconds to even start the performance recording.
I always wonder how many zero-days exist on purpose…
I've heard this sentiment a lot, that governments/secret agencies/whoever create zero-days intentionally, for their own use.
This is an interesting thought to me (like, how does one create a zero-day that doesn't look intentional?), but the more I think about it, the more I believe this simply isn't necessary. There are enough fallible humans and memory-unsafe languages in the loop that there will always be a zero-day somewhere; you just need to find it.
(this isn't to say something like the NSA has never created or ordered the creation of a backdoor - I just don't think it would be in the form of an "unintentional" zero-day exploit)
The NSA surely has ordered a backdoor.
>In December 2013, a Reuters news article alleged that in 2004, before NIST standardized Dual_EC_DRBG, NSA paid RSA Security $10 million in a secret deal to use Dual_EC_DRBG as the default in the RSA BSAFE cryptography library https://en.wikipedia.org/wiki/Dual_EC_DRBG
I'm not sure that governments actually create them, not prolifically at least. There's been some state actor influence over the years, for sure.
However, exploits known (only) to a state actor would most definitely be a closely guarded secret. It's only convenient for a state to release information about an exploit when either it has already been made public, or the consequences of not releasing are greater.
So yes, exactly what you said. It's easier to find the exploits than to create them yourself. By extrapolation, you would have to assume that each state maintains its set of secret exploits, possibly never getting to use them for fear of the other side knowing of their existence. Cat & Mouse, Spy vs Spy for sure.
I think you are right that the shady actors can mostly use existing bugs.
But you are also right that this is not the only way they work. With the XZ Utils backdoor (2024), we normal nerds got an interesting glimpse into how they create a zero-day. It was luckily discovered by a developer who wasn't looking for zero-days, just debugging a performance problem.
This doesn't affect the many browsers based on Chromium?
It does; it's just that the blog is for Chrome, so it doesn't mention other browsers.
"This vulnerability could affect multiple web browsers that utilize Chromium, including, but not limited to, Google Chrome, Microsoft Edge, and Opera"
Why on earth would you even assume something like this?
Honestly curious: do you think "based on Chrome" means they forked the engine, and not just applied a UI skin?
The CVE itself only lists Chrome as the "affected software configuration", and I missed the line in the blog post mentioning other browsers, so I had a slight doubt. Other projects could use a drop-in replacement lib for the CSS; that's something one sometimes sees for other components (e.g. crypto libs: some projects have compile-time options ready for this).
use after free.... ahh the irony
Isn't this a wrongly editorialized title? "Reported by Shaheen Fazim on 2026-02-11", so more like a 7-day.
It refers to how many days the software has been available for, with zero implying it is not yet out, so you couldn't have installed a new version; that's what makes it a risky bug.
The term has long since been watered down to mean any vulnerability (it was always a zero-day at some point before the patch release, I guess is those people's logic? I don't know). Fear inflation and shoehorning seem to happen to every scary/scarier/scariest attack term. It might be easiest not to put too much thought into media headlines containing 0-day, hacker, crypto, AI, etc. I recently saw RCEs that weren't actually remote, and "supply chain attacks" that weren't about anyone's supply chain, copied happily onto HN.
Edit: fwiw, I'm not the downvoter
Its original meaning was days since software release, without any security connotation attached. It came from the warez scene, where groups competed to crack software and make it available to the scene earlier and earlier: a week after general release, three days, same-day. The ultimate was 0-day software, software which was not yet available to the general public.
In a security context, it has come to mean days since a mitigation was released. Prior to disclosure or mitigation, all vulnerabilities are "0-day", which may be for weeks, months, or years.
It's not really an inflation of the term, just a shifting of context. "Days since software was released" -> "Days since a mitigation for a given vulnerability was released".
Wikipedia: "A zero-day (also known as a 0-day) is a vulnerability or security hole in a computer system unknown to its developers or anyone capable of mitigating it."
This seems logical, since by the etymology of "zero-day" it should apply to the release (i.e. disclosure) of a vuln.
> It refers to how many days the software has been available for, with zero implying it is not yet out, so you couldn't have installed a new version; that's what makes it a risky bug
"Zero-day vulnerability" and "zero-day exploit" refer to the vulnerability, not the vulnerable software. Hence, by common sense, the availability refers to the vulnerability info or the exploit code.
I think the implication in this specific context is that malicious people were exploiting the vuln in the wild prior to the fix being released
I wonder if this was found with LLM assistance. If yes, with which one? And is it a one-off, or does it mark the start of a new era? (I assume it does.)
If you haven't seen the news about LLM-generated bug reports, they are pretty disliked due to poor quality. So yes, a new era of LLM-generated bug reports has begun, and the results so far have been moderator/developer burnout, increased time before real bugs get taken care of (as devs treat each submission as a possible true bug), and many projects no longer accepting bug reports. I have seen a couple of anecdotal incidents where someone used LLMs to find real bugs (one guy showed off a chain he made on HN), and that was really neat. LLMs aren't incapable of producing real reports, but scammers and vibecoders see dollar signs they aren't going to put real effort into earning, and they submit every response from a prompt like "provide me a bug report for [XX package/app]" in hopes that one pays out. The individuals I saw make real bug reports were already developers, able to test and iterate on the code the LLM provided and make connections of their own, just like anyone else who uses LLMs responsibly instead of outsourcing their thinking.
Absolutely nothing in the announcement or any other publicly available source implies that, to my knowledge. Might as well speculate whether a random passer-by on the street is secretly a Martian.