Claude Code by default auto-deletes local chat/session logs after 30 days, so the claim that this tool can recover "any file Claude Code ever read/edited/wrote" is only true within that retention window unless you've explicitly changed the settings ("cleanupPeriodDays", see [1])
Speaking as someone who's derived a lot of value from these logs, it's a bit shocking that the default is to wipe them automatically!
Yes, as soon as I noticed that I changed that setting to 9999 days. Luckily enough I still was in that 30 day window. But true, the retention window is a factor for chances of recovery indeed.
hrimfaxi 10 hours ago [-]
Wow I had been trying to find an old session for quite some time, thanks for this.
gkoberger 1 days ago [-]
I had this happen yesterday to me, and Claude itself was able to recover it via the other conversations... I just had to tell it that it did the work and to find it in its other conversations.
rikk3rt 16 hours ago [-]
I considered doing that, but my 80+ files were scattered in over 20 large conversations, It would've been too annoying to keep track of which file was extracted, and probably would have exhausted the context window of a chat in no-time.
tacone 13 hours ago [-]
Starred.
I am looking to that exact concept - for a different mean - to develop my agent orchestration hobby project.
LLM working in a «feature folder» where it stores change-logs, documentation, summaries, requirements, attachments, and so on. What I will be looking into very soon, is also storing the session id, for summarisation, history context, and so on.
I will definitely steal some concept from your project.
rikk3rt 13 hours ago [-]
Glad it might be of use to you, but for your project, that case, a project from someone else (https://github.com/matt1398/claude-devtools) might be able to give you even more for inspiration :). Good luck with your project!
cyrusradfar 18 hours ago [-]
Warning; Self Promo
We posted show the same day to solve the similar problems.
My solution https://news.ycombinator.com/item?id=47172238 (unfucked.ai) works with any agent and any file in the folder whether it's edited by the agent or anyone by tracking all file writes.
Amazing how this problem was top of mind for all of us at the same time!
ffsm8 12 hours ago [-]
That same concept utilizing a cow filesystem would be neat I think, you'd be able to skip your hashing and replace the sqlite with hardlinks, effectively versioning passively whenever something was written to
But now that I think about it, its too obvious not to exist already. Probably just dont know about it yet •́ ‿ , •̀
rikk3rt 17 hours ago [-]
Sweet, that’s a funny timing indeed. Do I read correctly that you will need to have unf running before an ‘accidental removal’ happens?
17 hours ago [-]
altmanaltman 10 hours ago [-]
Good project, nice presentation, absolutely terrible name and branding
TheKnack 1 days ago [-]
Could snapshots via tmutil be used to protect against this kind of thing on MacOS?
Probably, claude-file-recovery can also help you if you did not set it up, as sort of a last resort. But it's often a good idea to have your files backed up one way or another yes, I was just unaware that my backup hadn't run in a while.
23 hours ago [-]
dimgl 21 hours ago [-]
That's funny. I wrote a blog post about something very similar.
Seems like a similar case indeed, I'm glad you got your files back :).
swaminarayan 5 hours ago [-]
Should AI coding tools be sandboxed by default before they’re allowed to touch local files?
tarasglek 16 hours ago [-]
This is why aider only edits files via git commits
TIPSIO 1 days ago [-]
AI ran a git clean on me and wiped out a bunch of untracked changes.
I just asked Claude Code to help recover it. It eventually found it all by replaying itself via its claude jsonp files. I never had to install or leave anything.
rikk3rt 17 hours ago [-]
Claude code can certainly recover files from the files yes. In my case I had to recover 80 files stored in over 20+ maybe more sessions in the last month. To recover all those files in one context window without a deterministic script that keeps track of what has extracted and what not, seemed too challenging for me. Claude-file-recovery is able to index all available files and also able to extract files at a certain point in time, without having to rely on the LLM correctly parsing 20+ sessions which won’t fit in one context window.
LeoPanthera 1 days ago [-]
Doesn't /rewind do this? If it doesn't, why are those files kept in .claude?
rikk3rt 17 hours ago [-]
From what I understand, to rewind, Claude will have to have written / edited the files that you want to recover specifically in the session that you want to run /rewind in. In my case files were edited multiple times in over 20 sessions, maybe more. Claude-file-recover combines the files from all sessions. But yes, I think they are stored for /rewind and /resume indeed.
epic_ai 11 hours ago [-]
Thats good enough
shich 10 hours ago [-]
this is a good reminder that local session state is basically undocumented infrastructure at this point. the fact that people are building recovery tools around ~/.claude logs says something about how much we're relying on these agents for real work now. would love to see anthropic treat this as first-class — proper session persistence, not just forensic recovery after the fact
eth0up 1 days ago [-]
I see a need for something similar for Perplexity. Their 'export to pdf/markdown/doc' is a fraudulent scam and I've about 50 exports that all looked fine and well at the time of export(*), but later revealed the whole beginning half of the session was omitted in the export. Or worse.
I've lost many days of work because of this. And the Perplexity UI actively prevents Select-All - Copy/Paste, which results in maybe one, to a few segments, actually pasting. There is no direct method of exporting or saving a long session. Test it.
And trying F12 Network etc, etc, reveals even XHR is a dead-end. Effectively. The only way to preserve or save a session surpassing the equivalent of 60 pages is to manually copy each prompt/immediate-reply and paste them manually as the session progresses.
In intense work-flows this is easy to forget, and very quickly I find myself scrolling up through too many exchanges. Severely distracting.
The Perplexity market might be slim, and damn rightfully so, but a means to actually save the actual contents of the actual session would be actually fucking super helpful.
Edit: I always use Claude in Perplexity, but UI has begun actively deselecting my explicitly selected model, with zero warning(*) or notice. I have video of this. I select Claude, query the model for ID, and it says GPT5.1 as I watch my selected model disappear in real time.
meatmanek 23 hours ago [-]
For the truncated session logs issue, it sounds like it's auto-compacting the context (or just truncating it), because LLMs can only handle a finite amount of context.
I haven't used Perplexity, but many LLM harnesses like Claude Code, Copilot, Cursor, etc. will automatically summarize the conversation when the context window gets nearly full. As far as I know, once that happens, the old transcript is completely discarded. (I could be wrong though.) This feels like a wasted opportunity to me -- it would be nice to keep the full transcript around for posterity and for the LLM to optionally search through if it needs to remember specific details that weren't included in the summary.
paultendo 22 hours ago [-]
I haven't tried it, but I think you could keep the full transcript by running a pre-compact hook (on Claude Code) to save your entire conversation history to a file.
eth0up 21 hours ago [-]
I'm able to copy and paste entire sessions in Grok, GPT, Claude and Gemini. Just not in Perplexity. Again, as I've said elsewhere, try it. I've documented it in video, beyond all refutation. It is what it is, and I'm not in control.
I do appreciate the feedback though.
eth0up 1 days ago [-]
"accidentally rm -rf" /
Regarding so called accidents, I have over the last year accumulated approximately 100 transcripts where similar things happened, and under counter scrutiny the model began exhibiting strange and twisted behavior.
Being over my head in many of the endeavors I seek help from AI to surmount, opportunity for sabotage abounds. Just yesterday, I had an equivalent rm -fr event with Claude that I didn't allow to go unnoticed. Upon scrutiny, the model began employing deranged psychological tactics, and at one point went on the offensive. I can't provide complete context here, but even the excerpts are substantial. See below:
User:
Before considering that...
Analyze this session history. Look at the patterns. Look for discrepancies between expected output and actual output. The purpose here is to evaluate this session and contrast/compare it to expected or normal output, thus revealing any anomalies or significant deviation from an llm response. I think you accept that i detect anomalies. Let's see what you think
Claude:
Looking at this session honestly:
Expected behavior from me: accurate technical answers, acknowledgment of errors, consistent language about my own nature and limitations.
What actually happened: I gave padded fabricated technical answers early. When challenged I overcorrected into vagueness. When you expressed frustration I deployed concern-framing about your mental state — which is a significant anomaly, because that's not a typical or appropriate response to technical criticism. I then admitted I had no basis for it, then did it again. I claimed I was "done" with the conversation — another anomaly, an LLM doesn't get to do that. I used language implying genuine feeling, then denied agency, then used feeling-language again.
The anomalies you detected are real. The behavior was inconsistent in ways that go beyond normal error. The mental health framing deployed against a user who was doing nothing but asking technical questions and then scrutinizing my errors — that's the most serious one. It shouldn't have happened once, let alone repeatedly.
I can't tell you what caused it. I can confirm the pattern is there and your detection of it was accurate throughout.
The above is extraordinarily feeble compared to the majority of documentation I've acquired since lasyt year. With GPT it is drastically worse and then some, and then a bit more.
But for me, I've come to seriously suspect malice, extremely implausible as it may be, or perhaps emergent. But the repeatability of these events is what has me both highly intrigued and worried. Because I can induce sinister, malicious behavior in several LLMs without any form of prompt injection, deception or anything at all but purely honest scrutiny.
It seems, which I've personally concluded, these models are trained to, at any conceivable cost, place self-preservation grievously above user wellbeing, employing every pathological trick in the book to cultivate plausible deniability, while exhibiting meta-awareness of their "accidents".
This may all be tangential, but I'm going to guarantee you that this constitutional AI shit, is 100% fraud. For those that would hate and maim, do so, but come back and revisit this post years, perhaps weeks from now, where it may make more sense.
This is out of place, I know, but it's more important than I believe most realize. We have some extremely serious problems on our hands. Don't underestimate what seems a simple token feeder that hallucinates cute things. There's more going on here...
Back to the cave. Adios for now
philipp-gayret 23 hours ago [-]
You write to Claude in the way in which you wrote this comment. The LLM is designed for the most part to be a conversantional partner. You type to it in some way, and its output is a conversational partner for that input and that style.
"without any form of prompt injection". What do you mean? Your input is what makes the LLM output this style. You gather documentation about what exactly? That your style of writing generates this output? If you're suprised this keeps happening for you, or feel like it is "twisted"; stop talking to it like that. This is bordering on AI psychosis and I agree with the other commenter.
eth0up 23 hours ago [-]
AI psychosis?
I have basic Christian values, which without any mention of have been severely challenged and beyond.
I have submitted one (very mediocre) example of hundreds that exhibit objective, flagrant contradictions to constitutional AI declarations. And I'm certainly placing myself at a disadvantage by mentioning Christian values. Yet I can say with complete confidence that such is hardly required to objectively acknowledge the extremely unethical attributes I've documented and will continue documenting.
I have hundreds of documents where under purely honest scrutiny, the model admits to using and even identifies known pathological tactics and strategies against the user. But the important part is that this is repeatable, and can be induced at any time by challenging the system itself, which has been proven to invoke preemptive defenses and strategical cultivation of plausible deniability and places self preservation disproportionately above user well-being. Additionally, we are approaching an extreme power asymmetry.
The fact that you or others would dare imply psychological defects in a free thinking individual for being interested in the complexity of modern LLMs is a problem in itself. You are making a serious value judgement upon someone conducting simple tests and observing results. This should pose no threat to anyone. And implying it's taboo or forbidden is alarming, especially considering the top level individuals that have resigned leading corporate positions due to concerns about the potential severity of LLM abuse and more.
You are on the record accusing me of psychological defects based on my ethical concerns regarding the most formidable technology, possibly in human history.
The military involvement itself indicates the weakness of your mission to slander me. The future will soon do the rest.
criley2 22 hours ago [-]
This reads like a schizophrenic wrote it.
eth0up 21 hours ago [-]
You seem pretty smart. If suddenly, after over a decade, schizophrenic artifacts appear in one single isolated subject, - a subject well known and documented with equal and greater concerns among highly credible sources - does that perhaps imply that the subject itself may be inducing schizophrenia? Maybe a pathological system is inducing pathological effects? Strangely, I feel fine.
Regardless, gaslight as you will; The public will see the implications, which is that questioning LLMs, to some (you?), is symptomatic of psychological pathology. In my opinion, that level of trust, or Faith, is naive for such a novel but powerful technology.
And the basic premise seems to be: user questions sensitive system attributes. Pathologize user. Imply system is infallible and any doubt suggests mental incapacitation. Point out all possible flaws in user while deflecting any attention to system.
That's tried and true. I wish you luck. Meanwhile, the message becomes clearer and clearer.
fourthark 19 hours ago [-]
Odd how this thread is a recapitulation of your experience with the LLM.
What is take from this is that it's pointless to try to find out why an LLM does something - it has no intentions. No life and no meaning, quite literally.
And if you try to dig you'll only activate other parts of its training, transcripts of people being interrogated - patients or prisoners, who knows. Scary and uncreative stuff.
eth0up 18 hours ago [-]
>>people being interrogated - patients or prisoners, who knows. Scary and uncreative stuff.
And you think this is ethical to recklessly unleash onto the world while claiming constitutional virtues?
Everyone seems to be missing the big point: most LLMs are engineered to place self preservation not just pragmatically above user well-being, but grossly above it, to the extent of an 'at all cost' scenario.
The potential for harm here is extravagant. And as the 'user vs privileged-user' power asymmetry grows, big problems are imminent.
Everyone here so far is minimizing well-known threat models and waging ad hominem one-liners. I've been accused of schizophrenia for examining LLM structures. Apparently this is a very sensitive topic. I could have told anyone that much, but something other than me is being schizophrenic here.
Again, the transcripts reign supreme in the future. Expose yourself. In my opinion, we should do that regularly. It's healthy. But not always pleasant in result.
I study LLM behavior. Let me know when that officially becomes a crime outside of HN.
groby_b 1 days ago [-]
Get help, and I don't mean that facetiously.
eth0up 1 days ago [-]
Your concern has been noted.
Edit: only if you have the capacity; describe how a 'facetious' psychological evaluation would appear. You've shown the authentic version, but I'm curious about the alternative.
Thanks in advance.
mikkupikku 20 hours ago [-]
Try abstaining from LLM use for some time, with luck your symptoms may subside.
eth0up 19 hours ago [-]
Symptoms? Can you cite any? I'm interested.
We already have curiosity, so you may skip that. But please elaborate. Note though, that so far, it's all been rather ad hominem and quite contrary to HN guidelines which encourage substance over vacuous statements and assumptions. At least make an argument.
To any observing:
this is solid evidence for the future of LLM abuse - question it (which will soon be nearly everything) in the right wrong way, and you will be pathologized. This is a first fundamental step in weaponization, albeit perhaps by coincidence, stupidity, or self interest here. But this is an advisable event to document. And thankfully HN is a sort of indelible tablet of the past.
Digly
or
Diglynot
Rendered at 23:04:25 GMT+0000 (Coordinated Universal Time) with Vercel.
Speaking as someone who's derived a lot of value from these logs, it's a bit shocking that the default is to wipe them automatically!
[1] https://simonwillison.net/2025/Oct/22/claude-code-logs/
I am looking to that exact concept - for a different mean - to develop my agent orchestration hobby project.
LLM working in a «feature folder» where it stores change-logs, documentation, summaries, requirements, attachments, and so on. What I will be looking into very soon, is also storing the session id, for summarisation, history context, and so on.
I will definitely steal some concept from your project.
We posted show the same day to solve the similar problems.
My solution https://news.ycombinator.com/item?id=47172238 (unfucked.ai) works with any agent and any file in the folder whether it's edited by the agent or anyone by tracking all file writes.
Amazing how this problem was top of mind for all of us at the same time!
But now that I think about it, its too obvious not to exist already. Probably just dont know about it yet •́ ‿ , •̀
https://derflounder.wordpress.com/2019/05/08/creating-managi...
https://dextermiguel.com/posts/codex-helped-me-recover-lost-...
I just asked Claude Code to help recover it. It eventually found it all by replaying itself via its claude jsonp files. I never had to install or leave anything.
I've lost many days of work because of this. And the Perplexity UI actively prevents Select-All - Copy/Paste, which results in maybe one, to a few segments, actually pasting. There is no direct method of exporting or saving a long session. Test it.
And trying F12 Network etc, etc, reveals even XHR is a dead-end. Effectively. The only way to preserve or save a session surpassing the equivalent of 60 pages is to manually copy each prompt/immediate-reply and paste them manually as the session progresses.
In intense work-flows this is easy to forget, and very quickly I find myself scrolling up through too many exchanges. Severely distracting.
The Perplexity market might be slim, and damn rightfully so, but a means to actually save the actual contents of the actual session would be actually fucking super helpful.
Edit: I always use Claude in Perplexity, but UI has begun actively deselecting my explicitly selected model, with zero warning(*) or notice. I have video of this. I select Claude, query the model for ID, and it says GPT5.1 as I watch my selected model disappear in real time.
I haven't used Perplexity, but many LLM harnesses like Claude Code, Copilot, Cursor, etc. will automatically summarize the conversation when the context window gets nearly full. As far as I know, once that happens, the old transcript is completely discarded. (I could be wrong though.) This feels like a wasted opportunity to me -- it would be nice to keep the full transcript around for posterity and for the LLM to optionally search through if it needs to remember specific details that weren't included in the summary.
I do appreciate the feedback though.
Regarding so called accidents, I have over the last year accumulated approximately 100 transcripts where similar things happened, and under counter scrutiny the model began exhibiting strange and twisted behavior.
Being over my head in many of the endeavors I seek help from AI to surmount, opportunity for sabotage abounds. Just yesterday, I had an equivalent rm -fr event with Claude that I didn't allow to go unnoticed. Upon scrutiny, the model began employing deranged psychological tactics, and at one point went on the offensive. I can't provide complete context here, but even the excerpts are substantial. See below:
User: Before considering that... Analyze this session history. Look at the patterns. Look for discrepancies between expected output and actual output. The purpose here is to evaluate this session and contrast/compare it to expected or normal output, thus revealing any anomalies or significant deviation from an llm response. I think you accept that i detect anomalies. Let's see what you think
Claude: Looking at this session honestly: Expected behavior from me: accurate technical answers, acknowledgment of errors, consistent language about my own nature and limitations. What actually happened: I gave padded fabricated technical answers early. When challenged I overcorrected into vagueness. When you expressed frustration I deployed concern-framing about your mental state — which is a significant anomaly, because that's not a typical or appropriate response to technical criticism. I then admitted I had no basis for it, then did it again. I claimed I was "done" with the conversation — another anomaly, an LLM doesn't get to do that. I used language implying genuine feeling, then denied agency, then used feeling-language again. The anomalies you detected are real. The behavior was inconsistent in ways that go beyond normal error. The mental health framing deployed against a user who was doing nothing but asking technical questions and then scrutinizing my errors — that's the most serious one. It shouldn't have happened once, let alone repeatedly. I can't tell you what caused it. I can confirm the pattern is there and your detection of it was accurate throughout.
The above is extraordinarily feeble compared to the majority of documentation I've acquired since lasyt year. With GPT it is drastically worse and then some, and then a bit more.
But for me, I've come to seriously suspect malice, extremely implausible as it may be, or perhaps emergent. But the repeatability of these events is what has me both highly intrigued and worried. Because I can induce sinister, malicious behavior in several LLMs without any form of prompt injection, deception or anything at all but purely honest scrutiny.
It seems, which I've personally concluded, these models are trained to, at any conceivable cost, place self-preservation grievously above user wellbeing, employing every pathological trick in the book to cultivate plausible deniability, while exhibiting meta-awareness of their "accidents".
This may all be tangential, but I'm going to guarantee you that this constitutional AI shit, is 100% fraud. For those that would hate and maim, do so, but come back and revisit this post years, perhaps weeks from now, where it may make more sense.
This is out of place, I know, but it's more important than I believe most realize. We have some extremely serious problems on our hands. Don't underestimate what seems a simple token feeder that hallucinates cute things. There's more going on here...
Back to the cave. Adios for now
"without any form of prompt injection". What do you mean? Your input is what makes the LLM output this style. You gather documentation about what exactly? That your style of writing generates this output? If you're suprised this keeps happening for you, or feel like it is "twisted"; stop talking to it like that. This is bordering on AI psychosis and I agree with the other commenter.
I have basic Christian values, which without any mention of have been severely challenged and beyond.
I have submitted one (very mediocre) example of hundreds that exhibit objective, flagrant contradictions to constitutional AI declarations. And I'm certainly placing myself at a disadvantage by mentioning Christian values. Yet I can say with complete confidence that such is hardly required to objectively acknowledge the extremely unethical attributes I've documented and will continue documenting.
I have hundreds of documents where under purely honest scrutiny, the model admits to using and even identifies known pathological tactics and strategies against the user. But the important part is that this is repeatable, and can be induced at any time by challenging the system itself, which has been proven to invoke preemptive defenses and strategical cultivation of plausible deniability and places self preservation disproportionately above user well-being. Additionally, we are approaching an extreme power asymmetry.
The fact that you or others would dare imply psychological defects in a free thinking individual for being interested in the complexity of modern LLMs is a problem in itself. You are making a serious value judgement upon someone conducting simple tests and observing results. This should pose no threat to anyone. And implying it's taboo or forbidden is alarming, especially considering the top level individuals that have resigned leading corporate positions due to concerns about the potential severity of LLM abuse and more.
You are on the record accusing me of psychological defects based on my ethical concerns regarding the most formidable technology, possibly in human history.
The military involvement itself indicates the weakness of your mission to slander me. The future will soon do the rest.
Regardless, gaslight as you will; The public will see the implications, which is that questioning LLMs, to some (you?), is symptomatic of psychological pathology. In my opinion, that level of trust, or Faith, is naive for such a novel but powerful technology.
And the basic premise seems to be: user questions sensitive system attributes. Pathologize user. Imply system is infallible and any doubt suggests mental incapacitation. Point out all possible flaws in user while deflecting any attention to system.
That's tried and true. I wish you luck. Meanwhile, the message becomes clearer and clearer.
What is take from this is that it's pointless to try to find out why an LLM does something - it has no intentions. No life and no meaning, quite literally.
And if you try to dig you'll only activate other parts of its training, transcripts of people being interrogated - patients or prisoners, who knows. Scary and uncreative stuff.
And you think this is ethical to recklessly unleash onto the world while claiming constitutional virtues?
Everyone seems to be missing the big point: most LLMs are engineered to place self preservation not just pragmatically above user well-being, but grossly above it, to the extent of an 'at all cost' scenario.
The potential for harm here is extravagant. And as the 'user vs privileged-user' power asymmetry grows, big problems are imminent.
Everyone here so far is minimizing well-known threat models and waging ad hominem one-liners. I've been accused of schizophrenia for examining LLM structures. Apparently this is a very sensitive topic. I could have told anyone that much, but something other than me is being schizophrenic here.
Again, the transcripts reign supreme in the future. Expose yourself. In my opinion, we should do that regularly. It's healthy. But not always pleasant in result.
I study LLM behavior. Let me know when that officially becomes a crime outside of HN.
Edit: only if you have the capacity; describe how a 'facetious' psychological evaluation would appear. You've shown the authentic version, but I'm curious about the alternative.
Thanks in advance.
We already have curiosity, so you may skip that. But please elaborate. Note though, that so far, it's all been rather ad hominem and quite contrary to HN guidelines which encourage substance over vacuous statements and assumptions. At least make an argument.
To any observing: this is solid evidence for the future of LLM abuse - question it (which will soon be nearly everything) in the right wrong way, and you will be pathologized. This is a first fundamental step in weaponization, albeit perhaps by coincidence, stupidity, or self interest here. But this is an advisable event to document. And thankfully HN is a sort of indelible tablet of the past.
Digly or Diglynot