The dynamic system map (.md file) approach is the most interesting part of this to me. The hardest problem in automated alert triage isn't the deduplication or even the RCA -- it's that the agent doesn't know what "normal" looks like for your system.
I've seen teams try to solve this by feeding every alert directly into an LLM (like nojs describes above), and the failure mode is predictable: the model treats each alert as an isolated incident because it has no topology awareness. It doesn't know that Service A calling Service B with 500ms latency is fine on Tuesday mornings because of the batch job, but a P1 on Wednesday afternoons.
The real question is how you keep that system map accurate as architecture evolves. In my experience, the map drifts within weeks unless it's generated from runtime data (traces, dependency graphs from actual traffic) rather than maintained manually. Static architecture docs are lies within a quarter.
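For illustration, a rough sketch of what "generated from runtime data" could look like: periodically rebuild the map from trace spans instead of hand-editing a doc. The field names (service, parent_service, duration_ms) and the output format are invented for the example, not any vendor's schema.

    from collections import defaultdict

    def build_system_map(spans):
        """spans: dicts with 'service', 'parent_service', 'duration_ms'."""
        edges = defaultdict(list)
        for span in spans:
            if span.get("parent_service"):
                edges[(span["parent_service"], span["service"])].append(span["duration_ms"])
        # Emit the .md system map the agent reads as context.
        lines = ["# System map (auto-generated from traces)"]
        for (caller, callee), durations in sorted(edges.items()):
            p50 = sorted(durations)[len(durations) // 2]
            lines.append(f"- {caller} -> {callee}: {len(durations)} calls, ~{p50}ms typical")
        return "\n".join(lines)

    spans = [
        {"service": "checkout", "parent_service": "api-gateway", "duration_ms": 120},
        {"service": "payments", "parent_service": "checkout", "duration_ms": 480},
        {"service": "payments", "parent_service": "checkout", "duration_ms": 510},
    ]
    print(build_system_map(spans))

Regenerating that from the last N hours of traffic keeps the "normal" baseline current without anyone maintaining a doc.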
Also curious about the severity scoring -- are you using user impact signals (error rates on user-facing endpoints, session replay data) or purely technical signals? The gap between "this looks bad in logs" and "users are actually affected" is where most alert fatigue comes from. A 500 error on an internal health check endpoint generates the same Sentry noise as a 500 on the checkout flow, but they're not remotely the same priority.
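To make that concrete, here's a toy version of mixing the two kinds of signal (the endpoint prefixes, weights, and 100-user cap are invented for illustration, not a recommendation):

    USER_FACING_PREFIXES = ("/checkout", "/login", "/api/orders")

    def severity_score(alert):
        """alert: dict with 'endpoint', 'error_rate' (0-1), 'affected_users'."""
        technical = min(alert["error_rate"], 1.0)
        user_facing = 1.0 if alert["endpoint"].startswith(USER_FACING_PREFIXES) else 0.2
        blast_radius = min(alert["affected_users"] / 100, 1.0)  # saturates at 100 users
        return 0.3 * technical + 0.4 * user_facing + 0.3 * blast_radius

    # A fully failing internal health check scores lower than a partially
    # failing checkout flow, even though both show up as 500s in Sentry.
    print(severity_score({"endpoint": "/healthz", "error_rate": 1.0, "affected_users": 0}))
    print(severity_score({"endpoint": "/checkout/pay", "error_rate": 0.05, "affected_users": 80}))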
Dimittri 4 hours ago [-]
Yes exactly! The purpose isn't to create one PR per alert—that would just move noise from one place to another. The bottleneck we're solving is triaging: cutting the noise and turning it into signal. Once we've done that, we fix issues and show you a PR.
I think it works well because we have two deduplication steps: one before Claude Code's analysis based on the alert context, and one after, grouping by RCA.
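Very roughly, and heavily simplified (the real fingerprinting and RCA grouping use much more context than this toy sketch), the two steps look like:

    from collections import defaultdict

    def dedupe_before_analysis(alerts):
        """Step 1: collapse alerts sharing an error type + endpoint before any LLM call."""
        seen = {}
        for alert in alerts:
            key = (alert["error_type"], alert["endpoint"])
            if key not in seen:
                seen[key] = {**alert, "count": 0}
            seen[key]["count"] += 1
        return list(seen.values())

    def group_after_analysis(analyses):
        """Step 2: after the RCA, one group (and at most one PR) per root cause."""
        groups = defaultdict(list)
        for result in analyses:
            groups[result["root_cause"]].append(result["alert_id"])
        return dict(groups)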
Severity also helps cut noise by highlighting which problems/solutions to review first. It's easier when the issue comes from a frontend trigger, but many alerts come from backend errors only. In those cases, Claude Code assigns severity based on its understanding of how a broken feature blocks product usage.
nojs 24 hours ago [-]
> Especially curious about your current workflows when you receive an alert from any of these channels like Sentry (error tracking), Datadog (APM), or user feedback.
I have a github action that runs hourly. It pulls new issues from sentry, grabs as much json as it can from the API, and pipes it into claude. Claude is instructed to either make a PR, an issue, or add more logging data if it’s insufficient to diagnose.
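simplified, it's roughly this shape (env vars, query params and the prompt are trimmed placeholders rather than the real action):

    import json, os, subprocess, requests

    token = os.environ["SENTRY_TOKEN"]
    org, project = os.environ["SENTRY_ORG"], os.environ["SENTRY_PROJECT"]

    resp = requests.get(
        f"https://sentry.io/api/0/projects/{org}/{project}/issues/",
        headers={"Authorization": f"Bearer {token}"},
        params={"query": "is:unresolved", "statsPeriod": "24h"},
        timeout=30,
    )
    resp.raise_for_status()

    prompt = (
        "For each Sentry issue below: open a PR with a fix, open a GitHub issue, "
        "or add logging if there isn't enough data to diagnose it.\n\n"
        + json.dumps(resp.json(), indent=2)
    )
    # claude CLI in non-interactive (print) mode; the action runs this hourly.
    subprocess.run(["claude", "-p", prompt], check=True)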
I would say I can merge about 30% of the PRs; for the remainder, the LLM has applied a bandaid fix without digging deep enough into the root cause.
Also the volume of sentry alerts is high, and the issues being fixed are often unimportant, so it tends to create a lot of “busy work”.
Dimittri 23 hours ago [-]
To avoid this 'busy work', we group alerts by RCA (so no duplicate PRs) and filter by severity (so no PRs for false positives or not-that-important issues). We realized early on that turning every alert into a PR just moves the problem from Sentry to GitHub, which defeats the purpose.
Is an hourly cron job enough to keep the product healthy? Do you receive alerts by email/Slack/other channels for specific issues, or when a PR is created?
nojs 21 hours ago [-]
interesting. yeah the only reason it’s on cron is because the sentry-github integration didn’t work for this (can’t remember why), and i didn’t want to maintain another webhook.
the timing is not a huge issue though, because the type of bugs being caught at this stage are rarely so critical that they need to be fixed in less time than that - and the bandwidth is limited by someone reviewing the PR anyway.
the other issue is crazy token wastage, which gets expensive. my gut instinct re triaging is that i want to do it myself in the prompt - but if it prevents noise before reaching claude it may be useful for some folks just for the token savings.
no, I don’t receive alerts because i’m looking at the PR/issues list all day anyway, it would just be noise.
Dimittri 19 hours ago [-]
totally get the 'token wastage' point—sending noise to an LLM is literally burning money.
but another, maybe bigger, cost might be your time reviewing those 'bandaid fixes.' if you're merging only 30%, that means you're spending 70% of your review bandwidth on PRs that shouldn't exist, right?
we deduplicate before the Claude analysis using the alert context, and again after based on the RCA, so there's no noise in the PRs you have to review
why don't you trust an agent to triage alerts+issues?
nojs 15 hours ago [-]
Yeah. what I find in practice is that since the majority of these PRs require manual intervention (even if minor, like a single follow up prompt), it's not significantly better than just hammering them all out in one session myself a few times per week, and giving it my full attention for that period of time.
The exception is when a fix is a) trivial or b) affecting a real user and therefore needs to be fixed quickly, in which case the current workflow is useful. But yeah, the real step-change was having Claude hitting the Sentry APIs directly and getting the info it needs, whether async or not.
I'd also imagine that people's experiences with this vary a lot depending on the size and stage of the company - our focus is developing new features quickly rather than maintaining a 100% available critical production service, for example.
Dimittri 5 hours ago [-]
Interesting. It makes sense that it depends on the number of alerts you receive. But I’d think that if 70% of the PRs you receive are noise, an AI triager could be useful—if you give it the context it needs based on your best practices.
I’m very curious about the kinds of manual intervention you do on PRs when one is required. What does the follow-up prompt look like? Is it because the fix was bad, because the RCA itself was wrong, or because of something else?
jefflinwood 1 day ago [-]
I tried the onboarding, but I think it timed out on the Analyzing screen because it couldn't find any issues in my Sentry environment. So I couldn't get too much further.
EDIT: It did let me in, but I don't know why it took so long.
I've worked on teams where there's been one person on rotation every sprint to catch and field issues like these, so taking that job and giving it to an AI agent seems like a reasonable approach.
I think I'd be most concerned about having a separate development process outside of the main issue queue, where agents aren't necessarily integrated into the main workstream.
Dimittri 1 day ago [-]
hey, thanks for the feedback! After onboarding, we process your latest issues to show you the triage and analysis, so it only works if you have past alerts. Do you have any alerts in Sentry?
we have a Slack bot feature so teams stay inside their existing workflow and don't have to go check the dashboard
achandlerwhite 23 hours ago [-]
Sounds interesting. Do you sponsor or otherwise support the open source projects you build on, as mentioned in your description?
Dimittri 23 hours ago [-]
We don't have plans to open-source the platform yet, but we prioritize transparency. For example, we display all tool calls and system prompts to help developers verify the RCA immediately.
Regarding the open-source projects—are you referring to rrweb and the Sentry SDK? We used them for the first version of our product, but we’ve since switched to connecting directly via OAuth and native integrations.
Dimittri 23 hours ago [-]
so we no longer build on top of them the way we did before
dwedge 1 day ago [-]
Oh ffs my manager is going to be talking about this in the stand up
Dimittri 23 hours ago [-]
hopefully it makes your life easier in the end!
stoneforger 16 hours ago [-]
Adding more complexity to uncover issues caused by complexity defeats the purpose but I guess people need to sell shovels.
Dimittri 5 hours ago [-]
It's hard to make it simple. The complexity is on our side; our goal is to cut the noise from production alerts, so for you we're removing complexity rather than adding it.