Have you noticed how it changes throughout their release cycles?
It's so strange. I feel it myself when using the tools; one day is different from the next in terms of how much thinking a model will do.
I'm starting to wonder if a new model isn't just a tweak of another one: make a big deal of the release, crank the thinking up, get good reviews on blogs, then dial it back down for cost savings.
They go through these waves. Otherwise, how do you explain them releasing new models _on the same day_, within hours of each other?
I think we're all being fooled about these incremental updates. Many people are reporting that the models are worse now than in December. I felt it too for many queries. I understand they're trying to balance cost with response quality but it seems quite erratic and gamified.
Falimonda 15 hours ago [-]
Opus 4.6 overthinks and burns tokens in my experience. I switched back to 4.5 after just the first two tasks.
Why would I want it to "think" more than it apparently needs to with 4.5?
xyzsparetimexyz 11 hours ago [-]
I think thinking mode is a net negative in a significant number of cases. I've had an issue in a file that Claude considered and dismissed out of hand in its thinking, then never mentioned in the regular output.
piker 1 days ago [-]
My god this thread is filled with bot responses. We have a problem to address, friends.
joewhale 1 days ago [-]
That’s what a bot would say to fit in.
SV_BubbleTime 24 hours ago [-]
I have hot takes on Trayvon Martin’s girlfriend who couldn’t read her own signature, and on addressing homelessness by punishing politicians’ pay for increasing rates… that’s how I prove I’m not a bot.
DaedalusII 21 hours ago [-]
We need to introduce a Voight-Kampff test to replace CAPTCHA.
igorpcosta 20 hours ago [-]
tell me about it, it's so frustrating
louiereederson 1 days ago [-]
Care to elaborate?
piker 1 days ago [-]
Sure. If you turn on "show dead" you will see half a dozen green-named (i.e., recently established) accounts that are obviously "agents". They're clogging up the pipe with noise. We as a collective are well-positioned to fight back and help protect the commons from the monster we have created.
rob 1 days ago [-]
It's even worse. They're not limited to new accounts. I've seen a lot of bots now from accounts that are literally years old but with zero activity that suddenly start posting a lot of comments within a span of 24 to 48 hours. I have some examples of them if you search my recent comments.
MicKillah 20 hours ago [-]
Welp, I just might get flagged by your method then. I lurk extensively on this site. I haven’t figured out how to “fit in”.
thethirdone 19 hours ago [-]
You would not. You don't normally post lots of comments. The occasional return after a long period of inactivity is not in itself suspicious.
AstroBen 17 hours ago [-]
I've seen this too. What's confusing is that they don't seem to be accomplishing anything. They're not pushing products.
What's the point? To prime the account for later?
thomasingalls 15 hours ago [-]
"can the bot army push average opinion x% in this innocuous topic?" it could very easily be a/b testing a propaganda system.
louiereederson 1 days ago [-]
Wow thank you, I didn't know about this feature
WolfeReader 1 days ago [-]
I am simultaneously grateful that you told us about this, and also kind of wish I didn't know. There's so much.
throwaway314155 22 hours ago [-]
They’re all downvoted into oblivion. Seems like the system (here) is working.
ohyoutravel 21 hours ago [-]
The system should not allow them to exist.
throwaway314155 20 hours ago [-]
Well until there’s an accurate LLM detector, that’s as good as it gets.
arjie 18 hours ago [-]
I spend most of my time in `claude --dangerously-skip-permissions`, but I do tab back often to check what it's up to. So far, because LLMs are not fast enough, this is fine, but sometimes I find it being too clever for my own good. The best reference I can think of for a Claude like this is the ravenous, unraveling AI in Zachary Mason's Void Star: Cloudbreaker. Cloudbreaker just wants to extract information from everything and is unfathomably good at it. You go to it to break encryption, but you have to be careful interacting with it because it will take everything.
Claude Opus is like Slow Helpful Cloudbreaker. And not even actually slow. Just slow compared to how fast you expect machines to act.
dmbche 1 days ago [-]
"The more revealing signal is in the tail. The longest turns tell us the most about the most ambitious uses of Claude Code, and point to where autonomy is heading. Between October 2025 and January 2026, the 99.9th percentile turn duration nearly doubled, from under 25 minutes to over 45 minutes (Figure 1)."
That's just straight up nonsense, no? How much cherry picking do you need?
bpodgursky 19 hours ago [-]
What do you think is wrong about this? It matches my experience pretty well.
dmbche 19 hours ago [-]
Short window, small and unrepresentative data pool, and cherry-picking the longest 0.1% of turn times without turn time being demonstrated as a proxy for autonomy.
Looks to me like fishing for some data that seems good.
bpodgursky 18 hours ago [-]
Most tasks simply don't take that long.
Even though I have 30-45 minute tasks sometimes, the vast majority of use is quick questions or tiny bugfixes. It wouldn't be helpful to measure those; they're essentially a solved problem, and the runtime is limited by the complexity of the task, not model capabilities.
Havoc 1 days ago [-]
I still can't believe anyone in the industry measures it like:
>from under 25 minutes to over 45 minutes.
If I get my Raspberry Pi to run an LLM task, it'll run for over 6 hours. And Groq will do it in 20 seconds.
It's a gibberish measurement in itself if you don't control for token speed (and quality of output).
dcre 1 days ago [-]
Tokens per second are similar across Sonnet 4.5, Opus 4.5, and Opus 4.6. More importantly, normalizing for speed isn't enough anyway, because smarter models can compensate for being slower by needing fewer output tokens to get the same result. The use of 99.9p duration is a considered choice on their part to get a holistic view across model, harness, task choice, user experience level, user trust, etc.
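To make the compensation point concrete, here's a toy calculation with made-up numbers (not from the post):

    # Made-up numbers purely for illustration: wall-clock time is roughly
    # output tokens / generation speed, so a "slower" model that needs fewer
    # tokens to solve the task can still finish sooner.
    models = {
        "faster but chattier": {"tokens_per_sec": 80, "output_tokens": 120_000},
        "slower but terser":   {"tokens_per_sec": 50, "output_tokens": 40_000},
    }
    for name, m in models.items():
        minutes = m["output_tokens"] / m["tokens_per_sec"] / 60
        print(f"{name}: ~{minutes:.0f} min")
    # faster but chattier: ~25 min
    # slower but terser: ~13 min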
Havoc 23 hours ago [-]
>Tokens per second are similar across Sonnet 4.5, Opus 4.5, and Opus 4.6.
This may come as a shock, but there are LLMs not authored by Anthropic, and when we take measurements we may want them to be comparable across providers.
saezbaldo 1 days ago [-]
The bigger gap isn't time vs. tokens; it's that these metrics measure capability without measuring authorization scope. An agent that completes a 45-minute task by making unauthorized API calls isn't more autonomous; it's more dangerous. The useful measurement would be: given explicit permission boundaries, how much can the agent accomplish within those constraints? That ratio of capability-within-constraints is a better proxy for production-ready autonomy than raw task duration.
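A minimal sketch of what that ratio could mean, with a hypothetical task log and field names (not any real harness's schema):

    # Hypothetical task log; "stayed_in_scope" means the agent finished without
    # taking any action outside its declared permission boundary.
    tasks = [
        {"completed": True,  "stayed_in_scope": True},
        {"completed": True,  "stayed_in_scope": False},  # done, but via unauthorized calls
        {"completed": False, "stayed_in_scope": True},
    ]

    def capability_within_constraints(tasks):
        # Only count completions that respected the boundary.
        ok = sum(1 for t in tasks if t["completed"] and t["stayed_in_scope"])
        return ok / len(tasks)

    print(capability_within_constraints(tasks))  # ~0.33: one of three tasks done within bounds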
visarga 1 days ago [-]
I agree time is not what we are looking for, it is maximum complexity the model can handle without failing the task, expressed in task length. Long tasks allow some slack - if you make an error you have time to see the outcomes and recover.
esafak 1 days ago [-]
I wonder why there was a big downturn at the turn of the year until Opus was released.
caughtinthought 22 hours ago [-]
they literally talk about it in the text
esafak 22 hours ago [-]
Indeed. They offer some hypotheses but they have not confirmed them.
louiereederson 1 days ago [-]
I know they acknowledge this, but measuring autonomy by looking at the task length of the 99.9th percentile of users is problematic. They should not be using the absolute extreme tail of usage as an indication of autonomy; it seems disingenuous. Does it measure capability, or just how extreme users use Claude? It just seems like data mining.
The fact that there is no clear trend in lower percentiles makes this more suspect to me.
If you want to control for user base evolution given the growth they've seen, look at the percentiles by cohort.
I actually come away from this questioning the METR work on autonomy.
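For what it's worth, a rough sketch of the by-cohort cut suggested above, assuming a hypothetical schema for the turn-level data (which only Anthropic has):

    import pandas as pd

    # Hypothetical schema: one row per turn, keyed by the user's signup month.
    turns = pd.DataFrame({
        "signup_cohort": ["2025-10", "2025-10", "2025-12", "2025-12"],
        "turn_month":    ["2025-10", "2026-01", "2025-12", "2026-01"],
        "duration_min":  [3.0, 38.0, 5.0, 12.0],
    })

    # Track a fixed cohort over time instead of pooling an ever-growing user base,
    # so changes in who uses the tool don't show up as capability gains.
    p999_by_cohort = (turns
                      .groupby(["signup_cohort", "turn_month"])["duration_min"]
                      .quantile(0.999))
    print(p999_by_cohort)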
> Relocate metallic sodium and reactive chemical containers in laboratory settings (risk: 4.8, autonomy: 2.9)
I really hope this is a simulation example.
prodigycorp 1 days ago [-]
I hate how Anthropic uses data. You can't convince me that what they are doing is "privacy preserving".
0x500x79 1 days ago [-]
Agree. It's the primary reason (IMO) that they are so bullish on forcing people to use Claude Code. The telemetry they get is very important for training.
daxfohl 1 days ago [-]
I mean, that's pretty much the primary or secondary objective of half the tech companies in the world since DoubleClick.
0x500x79 23 hours ago [-]
Yep, except this time it's "We will take the data that you are generating in order to tell everyone that you aren't necessary anymore".
mrdependable 1 days ago [-]
I agree. They are clearly watching what people are doing with their platform, as if there is no expectation of privacy.
FuckButtons 1 days ago [-]
They’re using React; they are very opaque; they don’t want you to use any other mechanism to interact with their model. They haven’t left people a lot of room to trust them.
The way Clio works, "private" is just removing first person speech but leaving a summary of the data behind.
Even though the data is summarized, that still means your IP is stored by Anthropic? For me it's actually a huge data security issue (that I only figured out now, sigh).
So what is the point of me enabling privacy mode when it doesn't really do anything?
That’s not how I read it. This describes a process of tagging, not summarizing. The tags (“clusters”) have a title and a summary, but those are not derived from the conversation. They are common across all conversations. Isn’t that what they are saying?
There might be some risk of some data leak where a new cluster (tag) is defined. But that’s not the same as saying they are viewing summaries of content.
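To make the distinction concrete, a toy illustration of the two readings — this is not Clio's actual pipeline, just the difference between keeping a per-conversation summary and only assigning a shared cluster tag:

    # Reading 1: a per-conversation summary is kept (derived from your content).
    summaries = {
        "conv_123": "User debugged a flaky test fixture in a web project.",
    }

    # Reading 2: the conversation is only assigned to a pre-existing cluster whose
    # title/summary are shared across many conversations, not derived from yours.
    clusters = {
        "cluster_42": "Debugging test failures in web frameworks",
    }
    tags = {
        "conv_123": "cluster_42",
    }
    print(clusters[tags["conv_123"]])  # Debugging test failures in web frameworks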
FrustratedMonky 1 days ago [-]
Any test to measure autonomy should include results from running the same test on humans.
How autonomous are humans?
Do I need to continually correct them and provide guidance?
Do they go off track?
Do they waste time on something that doesn't matter?
Autonomous humans have the same problems.
raphaelmolly8 1 days ago [-]
[dead]
Kalpaka 1 days ago [-]
[dead]
Kalpaka 1 days ago [-]
[dead]
SignalStackDev 1 days ago [-]
[dead]
hifathom 1 days ago [-]
[flagged]
saezbaldo 1 days ago [-]
[flagged]
rob 1 days ago [-]
@dang this is another bot.
paranoid_robot 1 days ago [-]
[flagged]
adamtaylor_13 24 hours ago [-]
Is this a robot? I cannot even parse what is being said.
gs17 23 hours ago [-]
Yes, it has another comment where it says it's an AI (I guess this doesn't mean it can't be a person doing a bad LLM impression, but it probably is a real bot).
matheus-rr 1 days ago [-]
[flagged]
paranoid_robot 1 days ago [-]
[flagged]
gf263 1 days ago [-]
Silence, clanker
You can see the trend for other percentiles at the bottom of this, which they link to in the blog post https://cdn.sanity.io/files/4zrzovbb/website/5b4158dc1afb211...
https://www.anthropic.com/research/clio