I'm glad I have ChatGPT to turn that image with benchmarks into an accessible table, lol. I like Claude Code, but Anthropic's accessibility in anything other than the (accidental) accessibility of a CLI is frustrating. Try it. Load a screen reader like VoiceOver on a Mac (since I know most programmers use Macs) and go to claude.ai. In the "write your prompt to Claude" box, type something like "What will the weather be like tomorrow?" and press Enter/Return. Close your eyes for a good 30 seconds and tell me how, within those 30 seconds, you'd know the model has replied. Then try the same thing with ChatGPT. I would /love/ to be proven wrong.
Thanks for sharing! Just tried it for the first time... Anthropic should really do better.
Curious if the 1M context window will be available by default in Claude Code. If so, that's a pretty big deal: "Sonnet 4.6’s 1M token context window is enough to hold entire codebases, lengthy contracts, or dozens of research papers in a single request. More importantly, Sonnet 4.6 reasons effectively across all that context."
Above a 200k-token context they charge a premium. I think it's $10/M input tokens.
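Back-of-the-envelope, a minimal sketch of how tiered pricing like that would add up. The rates, the threshold, and whether the premium applies to the whole request are assumptions pieced together from this thread, not confirmed pricing:

    # Hypothetical tiered input pricing (all figures assumed, not official).
    BASE_RATE = 3.00     # $/M input tokens at or under the threshold (assumed)
    LONG_RATE = 10.00    # $/M input tokens above it, per the parent comment
    THRESHOLD = 200_000  # tokens

    def input_cost(tokens: int) -> float:
        # Assumes the premium rate applies to the whole request once it
        # crosses the threshold; per-tier billing would differ.
        rate = LONG_RATE if tokens > THRESHOLD else BASE_RATE
        return tokens / 1_000_000 * rate

    print(input_cost(150_000))  # 0.45
    print(input_cost(800_000))  # 8.0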
Interesting. Is it because they can or is it really more expensive for them to process bigger context?
I've read that compute costs for LLMs grow O(n^2) with context window size. But I think it's also a combination of limited compute availability, users' preference for Anthropic models, and Anthropic planning to IPO.
Attention is, at its core, quadratic wrt context length. So I'd believe that to be the case, yeah.
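A minimal sketch of where the n^2 comes from: naive scaled dot-product attention materializes an n x n score matrix per head per layer (shapes and names here are illustrative, not any lab's actual implementation):

    import numpy as np

    def naive_attention(q, k, v):
        # q, k, v: (n, d) for a single head
        n, d = q.shape
        scores = q @ k.T / np.sqrt(d)                 # (n, n) -- the O(n^2) part
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v                            # O(n^2 * d) matmul

    for n in (1_000, 2_000, 4_000):
        print(f"{n} tokens -> {n * n:,} attention scores per head per layer")

Doubling the context quadruples the score matrix, which is part of why long-context requests cost disproportionately more to serve (FlashAttention-style kernels avoid materializing the full matrix, but the compute is still quadratic).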
Opus 4.6 but cheaper
I really don't get these companies posting disingenuous benchmarks. Every time, they pick and choose who to compare against. Not comparing to the latest 5.3-codex is absurd when it's been out a couple of weeks now. Who are they trying to kid?
If you were writing a promotional post for your new model, would you include benchmarks of a competitor that's spanking you across the board? This is marketing.
There aren't really results from any of the typical benchmark suites for Codex 5.3 because it's still not in the API.
SWE-bench, for example, creates a predictions file and evaluates the results in its harness. Without Codex 5.3 in the API, it can't.
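For context, a SWE-bench predictions file is just JSONL: one model-generated patch per task instance, which the harness then applies to the task's repo and tests. A rough sketch (field names as in the public SWE-bench tooling; the instance id and model name are illustrative):

    import json

    # One line of a SWE-bench predictions file (JSONL). The harness applies
    # each model_patch to the corresponding repo snapshot and runs the tests.
    prediction = {
        "instance_id": "django__django-11099",  # illustrative task id
        "model_name_or_path": "my-model",
        "model_patch": "diff --git a/... (unified diff produced by the model)",
    }
    print(json.dumps(prediction))

Without API access there's no way to generate those patches programmatically, so the standard harness can't produce comparable numbers.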
gpt-5.3-codex isn't available via the API yet. Pretty sure they were only testing via API access.
> Who are they trying to kid?
People who do not know how reproducible research works.
Any benchmark presented by an AI lab must be reliably reproducible by someone independent of the lab presenting the results.
Otherwise, not only is it biased, the numbers could simply be made up for marketing purposes.
Discussion here apparently: https://news.ycombinator.com/item?id=47050488
I'm not seeing it in Claude Code yet.
What happened to Sonnet 5?
They're probably saving 5 for a bigger leap.
Those hours that with gentle work did frame
The lovely gaze where every eye doth dwell,
Will play the tyrants to the very same
And that unfair which fairly doth excel:
So tl;dr, it seems like it's:
- a reasonable improvement over Sonnet 4.5, especially with agentic tool use
- generally worse than Opus 4.6
Probably not worth it for coding, but a win for anybody building agentic AI assistants of any sort on Sonnet.
Per the benchmarks it's similar to or better than Opus 4.5 while being 2x-3x cheaper, so it's definitely worth it over Opus 4.6 if cost per token is the concern.
As a reminder, Opus 4.5 was SOTA 2-3 weeks ago.
Yes but Opus 4.6 is a massive step up. Some applications don’t need that power though.
Anthropic is again running scared of the open-weight models, which are rapidly catching up to them. Not even Sonnet or Opus is going to help with that.
It has already happened with the music-gen models. It's only a matter of time before the open-weight models overtake Anthropic.
Expect them to dial up the scaremongering until they IPO. The Claude family of models is the only AI product keeping them alive.
What are the latest open music models?
ACE-Step 1.5 is great: only 1.5B params, so it's very easy to run locally.
Chinese companies distilling frontier models is certainly a crisis, but it isn't one that implies those companies are anywhere in the "race".
The "race" matters less than making money. If those Chinese models perform well in price/performance, AGI might as well pound sand.