The financial market things are over my head and I don't have a dog in this fight, but I think "Nobody is replacing Salesforce with their internally vibe coded software" is just false? Both taken literally [0] [1] and as denying the general trend. Just in my company we already replaced our WMS software subscription with our own solution, and I wouldn't have been able to write it fast enough or maintain it by myself without the use of Claude Code. I'd say "Not perfectly or with every edge case handled, but well enough that the CIO reviewing a $500k annual renewal started asking the question “what if we just built this ourselves”" is an accurate description.
[0] https://lovable.dev/blog/how-a-startup-replaced-a-salesforce...
[1] https://seekingalpha.com/news/4144652-klarna-shuts-down-sale...
I didn't think something could be worse than everyone using Salesforce, but everyone using a different, constantly broken, incompatible SF clone that no one understands may be that.
Lol give it 12 months.
Agreed. If anything, it puts downward pressure on pricing. Even if the CIO still buys Salesforce or whatever other tool, they won't be willing to pay as much.
If you don't give me a discount on my salesforce subscription I'll shoot myself in the face with this AI enabled gun?
You don't need AI to shoot yourself in the face; salesforce can do that just fine.
> Commented [1]: what if pee pee was poo poo
Rarely do I read something that starts off with such promise!
Don’t encourage his diaper fetish! [0]
I thought there would be one or two results, perhaps the result of poor recurring phrasing. Nope.
>i'm going to change your diaper and burp you
>Carlito is a very good boy Go piss in your diaper you big baby
>He doesn't care. He is a big baby who filled up his diaper with pee pee and poo poo
>you are a big baby and i am going to change your diaper and burp you <
>To be clear I call executives of multi trillion dollar companies scumbags and if you can't deal with that I'm not sure what to do. Burp you? Change your diaper?
>I am going to change your diaper and burp you
>Yeah man it's real authoritarian to say your second name is doodoo. Go change your diaper you big baby
>Yeah because you're a big baby with a big full diaper
>Hello sir this is your uber outside. I have your order from the diaper store
Is that a fetish, or just an idiosyncratic, more aggressive way of mocking someone for being weak/immature/"being a baby"?
Those quotes that I could interpret read more as contempt to me than some kind of role play.
Yeah it’s become a mini-meme amongst the AI folk on Bluesky [0]
"Search is currently unavailable when logged out"
He has this thing where he tries to involve people in his ABDL roleplay without consent.
I do enjoy a good Ed Zitron sneer. The fact that the original article moved markets says a lot about the critical thinking skills of stock market traders.
You should look into how he destroyed the small indie MMO Darkfall and gave the game 2/10 without ever playing it, in a Eurogamer review a few years ago. The developers had receipts and could prove that he hadn't played it.
It doesn't have any material effect on this article, but it says something about his ethics.
It's word against word in this situation. The logs prove nothing, as they are easily modifiable and the devs had a good reason to modify them.
From Wikipedia:
> Darkfall lead developer Tasos Flambouras claims that game server logs show that the Eurogamer reviewer played the game for under three hours, a claim denied by the writer.
Even if we take the lead developer's word for it, what you are describing is simply false.
The original memo: https://www.citriniresearch.com/p/2028gic
HN discussion: https://news.ycombinator.com/item?id=47114579
I'm about halfway through the original memo, and I hate the fact that kernels of truth lie here and there. For example, I used to work as a full-stack developer for about 2 years, and now I've been forced into what the memo calls the "gig economy" just to pay the rent, because companies slowed down hiring junior developers thanks to... Honestly, I don't really care at this point what I have to thank for it.
All I know is, whenever I read testimonies from people whose companies suddenly decided to force LLM usage for productivity, to be "AI first", with colleagues opening PRs that are only machine-reviewed and contain implementations they cannot justify beyond "Claude wrote it", it makes me burn out just reading them. And it's only going to get worse until it becomes better, but not for the developers.
Honestly, the one thing I could see justifying all the investment companies make in LLM-assisted coding is the full automation of software production. I can only see the current state of things as the "end game" for them, and only if they suddenly decide to jack up pricing to tap directly into corporate budgets rather than individual developers' budgets.
Offtopic - The success of coding agents must be Ed Zitron's nightmare.
He has been a perpetual bear
I don't think so.
His argument is not "this tech doesn't work", but rather "these businesses aren't economically viable"
And that the smoke and mirrors accounting and perpetual thirst for more billions indicates just how unviable it is
Whilst he does dunk on LLM capabilities, the framing is the business angle - can Anysphere etc. actually form a moat and make a profit?
>His argument has never been "this tech doesn't work", but rather "these businesses aren't economically viable"
Why? because of cost?
Cost, debt, difficulty forming a moat, gap between what the product promises and what it can do, and the difficulty actually raising capital required.
His style is acerbic and (imo) excessive sometimes. But he's also one of a minority of journos actually looking at the numbers and adding them up, which seems to be a rarity.
Cost is going down 20x-30x over the years, so he's wrong about this.
That doesn't matter if the free models are as performant in 6 months. I will never personally pay for a model I can have for free. ChatGPT 5 used to be my preferred model as a DMing help tool; now DeepSeek and LeChat are the ones I use, and they are better at what the OpenAI model used to be better at. And I think the models have hit their limit for my use case; I don't need better ones. I never 'reprompt' anymore, and just roll/improvise with what I get.
I find it interesting that in no case do you allow OpenAI to profit:
- if the costs go up then they can't make profits
- if the costs go down then you won't pay for them
It's hard to sell something I can have for free.
The only way for OpenAI to get my subscription back would be my country making open-weight AI or DeepSeek illegal. It was worth the price tbh, but they can't compete with free.
Those are very large reductions - can I ask you for a source?
And why is the error bar so large?
https://epoch.ai/data-insights/llm-inference-price-trends
> The rate of decline varies dramatically depending on the performance milestone, ranging from 9x to 900x per year
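For what it's worth, the "Nx per year" framing in that quote is just a geometric-mean decline rate. A minimal sketch (the prices and time span here are made-up illustration numbers, not from the linked data):

```python
# Hypothetical example: convert an overall price drop into the kind of
# per-year decline factor the quoted 9x-900x/year range refers to.
def per_year_decline(start_price, end_price, years):
    """Geometric-mean price-decline factor per year."""
    return (start_price / end_price) ** (1 / years)

# e.g. a token price falling from $60 to $0.15 per million over 2 years
factor = per_year_decline(60.0, 0.15, 2)
print(f"{factor:.0f}x per year")  # -> "20x per year"
```

The large error bar follows directly from this framing: the per-year factor is extremely sensitive to which performance threshold you fix, because cheap models cross low thresholds much sooner than high ones.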
Disagree. He's cherry picking an extremely limited subset of numbers, based on a weak understanding of the industry and a lack of access to a lot of private data, and taking advantage of vulnerable people.
>taking advantage of vulnerable people
What on earth do you mean by this? Who is getting taken advantage of?
I'm not sure how anyone can respond to that, without asking you to divulge that private data
Well, from my point of view: when they talk about gigawatt datacenters, then yes, it is economically nonviable. You just need to know the scale of a gigawatt to realize that we need to start building power plants and fortifying the power grid to ship a gigawatt of power to a single location. Until that build-out happens, which takes years mind you, it is competing with other consumers of power. Take another huge consumer of power: a large steel mill uses about 100 megawatts. So if that power becomes more expensive because of datacenters, then the price of steel will go up. And if the price of steel goes up, it affects a lot of things in the economy.
We are facing a situation where the short-term effects are memory and storage prices going up and a lack of jet engines. Long term we won't be able to build actual buildings and ships without financing them with even more debt than today, and everyone in the economy is going to service that debt through prices.
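A quick back-of-envelope check on the scale argument above, using only the figures the comment gives (1 GW datacenter, ~100 MW steel mill):

```python
# Scale comparison: a 1 GW datacenter vs. a large ~100 MW steel mill,
# plus the implied annual energy draw at full utilization.
datacenter_mw = 1000   # 1 GW expressed in MW
steel_mill_mw = 100

equivalent_mills = datacenter_mw / steel_mill_mw
annual_twh = datacenter_mw / 1e6 * 8760  # MW -> TW, times hours per year

print(equivalent_mills)      # -> 10.0
print(round(annual_twh, 2))  # -> 8.76
```

So one such site draws as much as ten large steel mills, on the order of 8-9 TWh per year if run flat out, which is why grid build-out times matter here.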
But the costs of inference have been going down 20x to 30x over the years, so how can you tell it is nonviable? Unless you are saying they are not paying market rate for the inference.
So, they still booked up all the RAM and SSDs in the world and are still going to use gigawatts of power. The price of energy production is not going to go down 20x or 30x; it just means that they can cram more inference into the same energy consumption if the cost goes down. But they aren't paying the market rate for inference, because everything is subsidized with debt and investors' money to scale as fast as possible. They are flush with money, and that is why they can book up all silicon production.
I have no idea if costs indeed came down 20x-30x.
This claim sounds extremely fancy when AI companies bleed money, and will keep bleeding money in the foreseeable future.
I don't pretend to know the future. Maybe LLMs become economically viable and are the future, maybe not. I don't really care either way, to be frank.
And I use LLMs, btw. I pay for a ChatGPT account, but I find it only moderately useful. I always sort of question myself upon renewal date if it is worth the 20 bucks I spend monthly on it.
In no small part I keep using it to keep myself up to date on the best practices of using them in case it becomes standard.
https://epoch.ai/data-insights/llm-inference-price-trends
Do you have any reason to not believe it? It’s expected for costs to come down
The graph you linked seems to compare different OpenAI models in terms of "price per million tokens".
I am very skeptical of any financial information that comes from OpenAI. I have no idea how truthful those numbers are, or how creatively they can be collected to paint a rosier future for them.
Even if the numbers are truthful, I have no idea how they calculate the price there. Is it in terms of the cost of compute they rent? Is this cost subsidized or not?
Also, I don't know this "epoch.ai" website, I don't know their stance. The website name itself does not inspire my confidence on their reporting of anything related to AI. "Eat meat, says the butcher" vibes and all.
You can claim that the AI bleeds money because training is expensive, but inference is cheap. So it will only be financially viable when they stop training models? So they would need to stop improving their capabilities entirely for it to make any sense, is that your claim?
Even if I take this claim at face value (and that would take a lot of faith I don't have to give), it doesn't sound as good as you think it does.
>To analyze the decline in LLM prices over time, we focused on the most cost-effective LLMs above a certain performance threshold at each point in time. To identify these models, we iterated through models sorted by release date. In each iteration, we added a model to the set of cheapest models if it had a lower price than all previous models that scored at or above the threshold.
Can you look at the analysis? It will make it clear. I mean, it's so obvious, because GPT 4 costs way more than GPT 5.2-mini but delivers much worse performance.
>Even if the numbers are truthful, I have no idea how the calculate price there. Is it in terms of cost of compute they rent? Is this cost subsidized or not?
Do you think they are subsidising 900x or simply that the costs have gone down?
Overall you have shown what I feel is extreme skepticism about something that is obvious. You can literally run a model on your laptop that matches an older closed model. Costs are obviously going down; I have shown data. Use your own anecdotes and report back.
Extreme skepticism of this kind doesn't help anyone.
> Overall you have shown what I feel is extreme skepticism in something that is obvious.
I think you show extreme faith in something that is very obscure.
For me to believe in the analysis I would need to trust the numbers that the analysis is based upon. I see no reason why I should trust this. What sort of regulatory body or neutral third party inspects those numbers to ensure they are not a fabrication?
But you can claim I am a hater if it justifies your worldview. Skepticism is sinful for the believer.
>> "The dataset for this insight combines data on large language model (LLM) API prices and benchmark scores from Artificial Analysis and Epoch AI."
I don't know about Epoch AI, but Artificial Analysis shares its methodology: https://artificialanalysis.ai/methodology
Their chart of inference prices split by benchmark intelligence: https://artificialanalysis.ai/trends#efficiency
> For our language model benchmarking, we note that we consider endpoints to be serverless when customers only pay for their usage, not a fixed rate for access to a system. Typically this means that endpoints are priced on a per token basis, often with different prices for input and output tokens.
Okay, correct me if I am wrong: this is measuring the inference costs for clients of AI services, not the inference costs that the AI service itself incurs when offering the service?
I mean, the other guy's claim is that inference costs have come down 20x-30x. But the analysis, if I understood correctly, is based on how much clients are paying for it, not how much it actually costs.
I can charge you 20x less for a service and have massive losses for it.
It could be that OpenAI is subsidising their models by _fifty times_. Do you really think they are doing that? In some cases the costs went down by 200x. Do you really think OpenAI is subsidising their models by 200x?
It's easier to just admit that technological advances helped decrease the cost, instead of coming up with more complicated explanations like VC funding, subsidies and so on.
For instance take Deepseek and other opensource models - even they have reduced their costs by a huge margin. What explanation is there for opensource models?
> It could be that OpenAI is subsidising their models by _fifty times_. Do you really think they are doing that?
Possibly. I don't know.
It could be unfeasible to increase prices so much whenever a new model was released.
Any assumption made here is based on vibes. I see no reason to drop my skepticism.
> Its easier to just admit that technological advances helped decrease the cost instead of coming up with more complicated reasons like VC funding, subsidies and so on.
They raised an absurd amount of cash, and still bleed money to an absurd degree.
VCs make money when they exit. OpenAI only needs to "make sense" until an IPO happens. Once private investors have their exit, the markets can be left to handle the resulting dumpster fire.
> For instance take Deepseek and other opensource models - even they have reduced their costs by a huge margin.
Chinese companies are very opaque. I don't pretend to have insight into it.
Is the company behind Deepseek profitable?
> What explanation is there for opensource models?
What do opensource models have to do with inference?
Your argument is that training is expensive but inference is cheap (something I see no evidence of). Why would a company give away the expensive part of the work?
>It could be unfeasible to increase prices so much whenever a new model was released.
This means you have no idea what I have been saying. A new model is costlier, but they release mini versions of old models that are way cheaper and compete with older models.
GPT 5 mini is way cheaper than GPT 4 but around the same performance
GPT-5 mini:
Input tokens: ~$0.25 per 1M
Cached input: ~$0.025 per 1M
Output tokens: ~$2.00 per 1M
-----
GPT-4 (legacy flagship):
Input tokens: ~$2.00 per 1M
Output tokens: ~$8.00 per 1M
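Taking the per-million-token list prices quoted above at face value, the implied reduction for API customers is easy to check (a sketch only; these are prices charged to customers, not the provider's internal costs):

```python
# Price ratios between the two models quoted above ($ per 1M tokens).
gpt4 = {"input": 2.00, "output": 8.00}       # legacy flagship
gpt5_mini = {"input": 0.25, "output": 2.00}

input_drop = gpt4["input"] / gpt5_mini["input"]     # how much cheaper input got
output_drop = gpt4["output"] / gpt5_mini["output"]  # how much cheaper output got

print(input_drop, output_drop)  # -> 8.0 4.0
```

So by list price alone this pair shows an 8x input and 4x output reduction; the larger multiples elsewhere in the thread come from comparing across longer time spans and different models.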
>Chinese companies are very opaque. I don't pretend to have insight into it.
False. The models are not opaque, you can literally download it and host it yourself. They have also released papers on how they reduced cost in certain areas.
This is literally them documenting a theoretical cost-profit margin of 545%:
https://github.com/deepseek-ai/open-infra-index/blob/main/20...
>The above statistics include all user requests from web, APP, and API. If all tokens were billed at DeepSeek-R1’s pricing (*), the total daily revenue would be $562,027, with a cost profit margin of 545%.
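To make the quoted figure concrete: assuming "cost profit margin" means profit divided by cost (my reading of the quote, not something the document defines), the implied daily cost can be backed out from the stated revenue:

```python
# Back out implied daily cost from the quoted DeepSeek figures:
# $562,027/day theoretical revenue at a 545% cost-profit margin,
# i.e. revenue = cost * (1 + 5.45).
daily_revenue = 562_027
margin = 5.45  # 545% expressed as a fraction

implied_daily_cost = daily_revenue / (1 + margin)
print(round(implied_daily_cost))  # -> 87136
```

That is, roughly $87k/day of inference cost against $562k/day of theoretical revenue, under their own stated assumptions.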
Not only that, there are other providers hosting these opensource models, there are so many companies - just go to openrouter.com
So this is your skepticism
- openai is subsidising their models so much that each year they keep doing it 20x and eventually reached 100x reduction
- all the investors are stupid and they still invest in openai despite unprofitability
- employees of openai and anthropic who have claimed that the unit costs are not high are also lying
- all other providers are in on the lie
- the chinese models like Deepseek are also in on the lie by posting research that is not plausible
- the fact that you can run models in your laptop today that beat previous years models is also not enough
> openai is subsidising their models so much that each year they keep doing it 20x and eventually reached 100x reduction
If that's the truth, then originally they were subsidizing their models by the same factors.
This is not a great argument no matter how you cut it. And even then I would need to see evidence that this is true.
> all the investors are stupid and they still invest in openai despite unprofitability
Much to the opposite, those people are very smart. OpenAI can be extremely unprofitable and they can still profit massively through an exit event.
> employees of openai and anthropic who have claimed that the unit costs are not high are also lying
Possibly? Especially if they are in the position to profit in the case of an exit event, they would have every incentive to paint a rosier picture about the company.
> all other providers are in on the lie
I have no idea who you are talking about.
> the chinese models like Deepseek are also in on the lie by posting research that is not plausible
As I previously stated, I have no idea if Deepseek is profitable. By the looks of things, neither do you. Mentioning Deepseek's research is a non-sequitur.
> the fact that you can run models in your laptop today that beat previous years models is also not enough
This has no bearing on the cost of inference.
I don't agree that Ed doesn't comment on the actual tech. Here are some things he has said before; please tell me if these still hold in spirit:
> You cannot "fix" hallucinations (the times when a model authoritatively tells you something that isn't true, or creates a picture of something that isn't right), because these models are predicting things based off of tags in a dataset, which it might be able to do well but can never do so flawlessly or reliably.
ChatGPT is fairly reliable.
>Deep Research has the same problem as every other generative AI product. These models don't know anything, and thus everything they do — even "reading" and "browsing" the web — is limited by their training data and probabilistic models that can say "this is an article about a subject" and posit their relevance, but not truly understand their contents. Deep Research repeatedly citing SEO-bait as a primary source proves that these models, even when grinding their gears as hard as humanely possible, are exceedingly mediocre, deeply untrustworthy, and ultimately useless.
This is untrue in spirit.
> You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.
Imagine if they’d done something else.
Imagine if they’d done anything else.
Imagine if they’d have decided to unite around something other than the idea that they needed to continue growing.
Imagine, because right now that’s the closest you’re going to fucking get.
This is what he said in 2024. He really thought ChatGPT is not in the future.
There are so many examples, and it's clear that he's not arguing in good faith and has consistently gotten the spirit wrong.
This guy sounds like an uninformed jackass.
Look at Gemini 3.1 Pro on the AA-Omniscience Index, which measures hallucinations. It's 30, previous best was 11.
https://artificialanalysis.ai/evaluations/omniscience
With the amount of talent working on this problem, you would be unwise to bet against it being solved, for any reasonable definition of solved.
> With the amount of talent working on this problem, you would be unwise to bet against it being solved, for any reasonable definition of solved.
I'm honestly not sure how this issue could be solved. Like, fundamentally LLMs are next (or N-forward) token predictors. They don't have any way (in and of themselves) to ground their token generations, and given that token N is dependent on all of tokens (1...n-1) then small discrepancies can easily spiral out of control.
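The "small discrepancies spiral" point can be illustrated with a toy model (a deliberate oversimplification: it assumes each token is independently correct with probability p, which real models are not, but it shows why tiny per-token error rates still bite over long generations):

```python
# Toy compounding-error model: if each generated token is "right"
# independently with probability p, the chance an n-token answer
# contains no error at all is p**n.
def all_correct(p, n):
    return p ** n

# Even 99.9% per-token reliability leaves long outputs mostly error-prone:
print(round(all_correct(0.999, 1000), 2))  # -> 0.37
```

Under this (crude) model, a 1000-token answer at 99.9% per-token accuracy is fully correct only about a third of the time, which is the intuition behind the grounding worry.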
To solve it doesn't mean we have to eliminate it completely. I think GPT has solved it to a sufficient extent that it is reliable. You can't get it to hallucinate easily.
It depends on how much context is in the training data. I find that they make stuff up more in places where there isn't enough context (so more often in internal $work stuff).
Which success? I still see those things churning out laughably wrong code at every turn.
It's not a point and click code machine, but it's laughably wrong to say they just churn out laughably wrong code.
Half of developers are below average. Half of developers say that the code AI produces is amazing. Would you like a Venn diagram?
Latest developer surveys (StackOverflow, DORA, DX, Pragmatic Engineer, etc.) show AI adoption up to 85 - 90%. Can you incorporate that into the venn diagram? ;-)
minor correction: they say AI produces code that is 'mostly' amazing.
You’re saying that AI is already good enough to replace 50% of developers. Sounds like you agree it will be very important.
He's saying the good half of developers have to deal with the increased slop output of the bad half. Probably will be overwhelmed by it, in the end.
It _can_ produce slop if people stop thinking. I've also seen it do just fine, when people know when, where and how to use it. That's the part that frightens me, not the code it makes itself.
I will have to bring out the Venn diagram.
Go ahead!
You're telling on yourself here
I guess everyone stopped using Claude Code already then, if it doesn't work.
It doesn't work precisely because people are using claude.
If it worked, there'd be no people using it.
Nobody goes there, it's too crowded.
"It's not revolutionary automation if every 'automator' has an operator attached" - doesn't take a Yogi to figure out...
"Success" is a very relative statement in this context.
Some one should compile concrete predictions that he made vs how they turned out.
He hedges so much that it's probably impossible to catch him in a contradiction or missed prediction. It must be all that practice running a PR firm for AI companies.
It's not that hard really.
>You can fight with me on semantics, on claiming valuations are high and how many users ChatGPT has, but look at the products and tell me any of this is really the future.
Imagine if they’d done something else.
Imagine if they’d done anything else.
Imagine if they’d have decided to unite around something other than the idea that they needed to continue growing.
Imagine, because right now that’s the closest you’re going to fucking get.
This is what he said. Clearly wrong in spirit.
Have they been successful?
This is not off-topic at all!
> "What if our AI bullishness continues to be right...and what if that’s actually bearish" - what if pee pee was poo poo
Despite the vulgarity, it is exceptionally illuminating as to how much some of these slop pieces are just a mere pretension of rhetoric. I see this pretty consistently in a lot of the material I come across on the job that's gone through the LLM meat-grinder.
Also, the comment made me giggle like a little kid.
What's pretend-rhetoric about it? They're positing agents will prove to be very capable, but that this would ultimately be a bad thing by automating away too much of the economy. You can argue whether that's plausible or not, but it isn't an incoherent or vapid argument.
I suggest you read the annotation if that question isn't just rhetorical. I'm not familiar with Ed, but he has a pretty good take down in here if you can get past his somewhat juvenile writing style.
It is a problem when your doomsday timeline for obsolescence is already out of date the minute you publish. The memo itself was fantasy doomer porn on day 1.
Ed's main thesis is that cost is unsustainable for AI companies but this is clearly wrong.
The unit cost is going down and has gone down by more than 20-30x over the years. Sure, the fixed cost of training is going up, but that's because of the implied returns. Once the returns to training don't materialize, that spend would simply shrink, modulo cutoff-date updates. The companies have the choice to just stop training and focus on inference cost reduction.
What am I missing here? Unless the consumers decide that they are no longer willing to pay the same amount as before and their expectations are rising with prices falling, what else?
Is that the cost per token or the actual cost of the user having a conversation, reasoning and all?
Cost per defined capability. Meaning you fix the task and then find how much it cost to achieve it including reasoning, tokens etc.
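A sketch of what "cost per defined capability" means in practice (the prices and token counts below are hypothetical illustration numbers, and the assumption that reasoning tokens bill as output tokens is mine, not the commenter's):

```python
# "Cost per defined capability": fix a task, then total everything the
# model consumed to complete it, reasoning tokens included.
def task_cost(in_tok, out_tok, reasoning_tok, in_price, out_price):
    """Prices in $ per 1M tokens; reasoning tokens assumed billed as output."""
    return (in_tok * in_price + (out_tok + reasoning_tok) * out_price) / 1e6

# e.g. 20k input, 3k output, 15k reasoning tokens at $0.25 / $2.00 per 1M
cost = task_cost(20_000, 3_000, 15_000, 0.25, 2.00)
print(f"${cost:.3f}")  # -> "$0.041"
```

Measured this way, a cheaper-per-token model can still score worse if it burns far more reasoning tokens to finish the same task, which is why fixing the task first matters.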
He is funny and entertaining but does he provide constructive investment advice? I am not so sure.
I've also heard Cory Doctorow recently offer a similarly dismissive view, describing AI as "just statistics".
> I've also heard Cory Doctorow recently offer a similarly dismissive view, describing AI as "just statistics".
Well, AI partisans have applied grandiose terms like "thinking," "intelligence," and "soul" to these machines. It's not wrong to push back and remind people what they really are.
Well, if you look into the current scientific consensus on human cognition, our intelligence is also "just statistics."
So I guess we all agree, except that some people think "just statistics" is derogatory phrasing!
What a software engineer comment.
Where does the title come from?
What is this document?
What is the context?
Good questions. Here is the author's BlueSky post about it:
https://bsky.app/profile/edzitron.com/post/3mfkc63h6222l
> "Here is an annotated version of the Citrini Memo with my own intro. It is analyslop - scare-fiction written to ingratiate AI boosters and analysts/traders with tales of ultra-automation and socialist data center policies. Shameful that the markets reacted at all."
It’s sort of disappointing to me how on both sides it seems hard to have any sort of rational perspective. I find both the Citrini memo (and the subsequent market reaction) and Ed Zitron’s critique of it to be wildly off-base.
I wish everyone would just calm down a bit.
I've started to feel like Ed Zitron is actively hurting people I care about.
I'm lucky to have worked in the field for a long time, and be able to spend a lot of tokens. In the last month it's become clear to me that the tech works. The science is done, and what's left is engineering.
There are a lot of risks and mitigations and theory to build, but it's all solvable. The tech isn't mature, but neither was the Internet 30 years ago. And we built transatlantic cables and ran new wires to everyone's house.
People I care about, engineers with 20 years of experience, are having mental health breakdowns, caused by Zitron's work. They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated. I'm trying to be supportive and help them start to recover, but it's slow going.
If someone is having a crisis about this, I hope they start talking to a therapist. I don't need them to agree with me, but I do need them to not harm themselves.
> They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated.
They can always learn the technology later, when and if it proves itself to be useful :) I personally don't understand the hype, even after using Claude and other AI tools - but perhaps that will change in the future.
If your company offers 'training' with 'AI experts' and 'prompt engineers', I urge you to attend. It's very gratifying, it cures impostor syndrome, and you will understand who is behind the hype and their technical level.
(And it is already useful, just not as much as some people sell it)
> I urge you to attend
Of course :) It’s interesting to hear the ideas people come up with, but so far no one has demonstrated any practical results that would significantly improve the quality of work in my field. It has, however, increased the amount of slop that I need to deal with on a daily basis. Worse yet, it is not always programming slop :)
> And it is already useful, just not as much as some people sell it
In a general context, I agree. When it comes to programming, however, my experience has been different. If this technology were presented more modestly / realistically, it likely wouldn’t have attracted billions of dollars in investment and the hype. I think this is exactly what many sensible people point to when debating whether this is a bubble :)
Not sure how this comment got upvoted; calling skepticism of an emerging industry a "mental breakdown" and suggesting those "suffering" from it to talk to a therapist doesn't really clear the bar for discussion here. This reads more like a manager being salty that their team isn't using up all the Grok budget this quarter or whatever.
And let it be clear that nobody is being "actively hurt" by legitimate economic/business grievances. This is victim-blaming and disgusting rhetoric.
> They insist the tech will never work, and avoid learning about it, becoming progressively more paranoid and isolated. I'm trying to be supportive and help them start to recover, but it's slow going.
If you are right, and the tech works, both you and them will be continuing this conversation in a soup kitchen.
More likely a mass grave
There's nothing to recover from; what are you even talking about? I'm not a token user (and I can't make predictions about the future and whether it will force me to use tokens, but still). That the industry is collectively having a delusion about what constitutes good software (in all senses of the word: functionality and consequences for society) is clear to see, and it's something I too fear we might never recover from. But I stand quite clearly on the side of people, not of corporations hoping to extract more, more, more.
Nice DARVO, mate.
The internet 30 years ago worked great, what are you talking about.
Good to see people are finally turning against this grifter.
"AI fake, AI poo poo, AI going away!" is the only argument he ever had. Nothing more.
Did you actually read the articles he wrote going through the finances of these companies? He definitely has a bone to pick, but his numbers don't lie. The amount of return these AIs need to generate, given the amount of spend, is so ridiculous that unless they really do automate most jobs, they're screwed. There's a reason these companies only post AI revenue now, not profit.
Bubble doomerism is nothing novel. As is always the case, he's right vertically and wrong horizontally. Serious people in serious publications still speculated that the internet was a fad and would be over soon as late as 2008.
OpenAI will collapse, almost certainly. Anthropic might get by if they can make it to IPO before it all comes tumbling down. Google will buy up all the datacenters in a fire sale like they did with dark fiber after the .com bubble popped and continue building out stuff like NotebookLM.
Amazon and Microsoft will still be there selling server time to model providers and doing custom enterprise solutions like always. They already host the major proprietary models and sell API access.[0]
The top open models are already good enough. At this point prompting and coordination are the big bottlenecks. It would be nice if the bubble lasts long enough for open models to match at least the latest Opus.
His problem is the focus on the bubble and not on what usually happens after. People will bandy his pieces about insisting it's all short lived and they can just wait it out. Kimi K2.5, GLM 5, and MiniMax 2.5 aren't going away.
[0] For example: https://azure.microsoft.com/en-us/blog/claude-opus-4-6-anthr...
https://aws.amazon.com/about-aws/whats-new/2026/2/claude-opu...
>The top open models are already good enough.
Rare opportunity for me to actually downplay frontier AI for a change. We can do a lot better. I think the next 6 months will be a stream of releases that shall leave all the current models in the dust. Opus 4.6 will be no more relevant than 3.5 Sonnet.
If this is the case, all bubble talk will have to be re-evaluated.
Ed Zitron, from what little I have heard of him, seems incredibly irrational. I don't think I've ever seen anybody stick their head deeper in the sand more than I've seen him do.
It's one thing to dislike or even detest something, but to constantly claim it is worthless and without use when people are already benefitting from it everyday is nothing short of delusion.
>from what little I have heard of him
That's an interesting way to start criticism about ignorance