The data retention example is the most interesting part of this. The ECL didn't just learn the rule, it learned why the rule exists - reps kept getting it wrong. That's a different thing entirely. Most knowledge systems store the conclusion and quietly lose the reasoning that produced it.
Which makes me wonder: how does the maintenance agent know when to revisit a rule like that? "Feature X ships in Q3" is easy - facts go stale and you can detect it. But "don't let reps answer data retention questions" - that rule could still look valid in the ECL long after the original reasons for it stopped applying. Does it track enough of its own provenance to catch that kind of drift?
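To make the question concrete, here's a minimal sketch of what rule-level provenance could look like (all names and the review policy here are hypothetical, not anything from the article): each rule stores the observations that justified it, so a maintenance pass can flag a rule whose justifications have all gone stale, instead of only checking facts for staleness.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Justification:
    # Why the rule was adopted, e.g. "reps answered retention questions wrong"
    reason: str
    observed_at: datetime

@dataclass
class Rule:
    text: str
    justifications: list = field(default_factory=list)

    def needs_review(self, now: datetime, max_age: timedelta = timedelta(days=365)) -> bool:
        # Flag the rule when every justification behind it is older than max_age,
        # even though the rule itself still "looks valid" in the knowledge base.
        return all(now - j.observed_at > max_age for j in self.justifications)

rule = Rule(
    text="Don't let reps answer data retention questions",
    justifications=[
        Justification("reps repeatedly gave wrong answers", datetime(2023, 1, 15)),
    ],
)
print(rule.needs_review(datetime(2025, 6, 1)))  # justification is over a year old
```

This only catches age-based drift; detecting that the *reason* no longer applies (e.g. reps got retrained) would need the maintenance agent to actively re-test each justification, which is the harder problem.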
scrumper 15 hours ago [-]
> that rule could still look valid in the ECL long after the original reasons for it stopped applying.
Ha, then it'd be doing a great job of internalizing institutional knowledge! Wait a few years and then put another one on top. I'm not sure how these things incorporate new knowledge over time, or handle re-orgs and strategy shifts, or adapt as new verticals are added. Do you need ever increasing numbers of agents to keep things in line?
As much as I'd love to have a perfect example of one of these running - it really would be very beneficial - I do have a vague feeling that these ECL concepts (and similar Enterprise-wide knowledge management AI panaceas) are the 21st century equivalents of trying to build comprehensive expert systems in Prolog.
This is cool though. Agents make it seem more plausible in a way that pure RAG systems don't. I am sure there is mileage in more focused cases (like at the author's startup, or departmentally.)
chrisweekly 21 hours ago [-]
Fantastic article. I've always felt that institutional knowledge flow is one of the most essential factors in a given company's ability to survive. In the nascent age of AI, this "Enterprise Context Layer" approach seems more likely to catch on (and become table stakes, in order to keep up) than something like https://dotwork.com which looks amazing but seems to imply vendor lock-in.
eddy162 22 hours ago [-]
Felt like this read my mind. I was shocked recently at how good Cursor (with Claude) is at answering questions given its Slack/GSuite MCP connections - and a lot faster than Glean. It's also amazing to see how this can literally give better answers than some humans would.
fittingopposite 11 hours ago [-]
Any good open source solutions for this?
kingjimmy 21 hours ago [-]
"But what if I told you that all you need is 1000 lines of python + a github repo?" didn't need to read past this line LMAO. not at all enterprise.
F7F7F7 21 hours ago [-]
Don't worry. Someone will come along and run the same 1000 lines on a Docker container using ECS Fargate launched with Step Functions under the watchful eyes of CloudWatch, all glued up with Lambda, and stick everything behind IAM roles and Parameter Store and charge 100x more... then it can fit your definition.
nullpoint420 14 hours ago [-]
Even worse. The same code but deployed as a ZIP file….