I feel LLMs are indeed getting better at writing models. But, in my experience, they struggle to come up with correct safety and liveness properties unless you work closely with them. And of the two, they struggle most with correct liveness properties.
Also, for some problems I observe that models produced by LLMs often cause state-space explosion. For simpler models they can fix this when you guide them, though.
I’m sure LLMs will get even better.
That said, I take a slightly different approach. Lamport said, “If you're thinking without writing, you only think you're thinking.” Taking that advice, I always try to write the first draft by hand, and once I have the final shape in place I turn to an LLM for further exploration and experimentation if I have to.
I don't use TLA+ to model real-world systems anymore; Claude is able to model systems in Lean 4, and the binary executable can handle real input, or I can directly generate C/Rust from proofs with numeric types that have ring structure (integers, rationals, bits).
https://github.com/lambdaclass/truth_research_zk
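A minimal sketch of that workflow (clampAdd is a made-up example, not from the comment): you write an executable definition, prove a property about it once, and the same definition still runs on real input:

```lean
-- Hypothetical example: an executable definition with a proved bound.
def clampAdd (cap a b : Nat) : Nat := min cap (a + b)

-- The proof obligation is discharged once; the definition stays executable.
theorem clampAdd_le_cap (cap a b : Nat) : clampAdd cap a b ≤ cap :=
  Nat.min_le_left _ _

#eval clampAdd 100 70 50  -- 100
```

The appeal is that the proved artifact and the running artifact are the same definition, so there is no spec-vs-implementation gap to police.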
The point of TLA+ is easily readable specs with refinements. That's why it was designed in a novel way rather than in the older executable-with-more-complex-logic style that Lean (and other maths/spec languages) offer. I'm not saying you should prefer TLA+'s approach, but that's what it's meant to accomplish and LLMs don't change that.
thomasahle 1 day ago [-]
I'm currently choosing the right formalization for a big hardware project.
I'm considering SVA, TLA+, and Lean, with the first being more domain-specific and the others more general.
Do you think we'll move towards "Lean for everything" or do domain specific formalisms still make sense?
kown7 1 day ago [-]
Have you considered P? It feels like a good abstraction for engineers as it's "proper" code.
https://github.com/p-org/P
SystemVerilog Assertions. Hardware (silicon ASICs, and often FPGAs too) is written in a language called SystemVerilog. It has a feature called "concurrent assertions", which is usually just called SVA.
These are sort of temporal regexes, e.g. you can write
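A representative assertion, reconstructed as a sketch from the description that follows (the clock `clk` and signals `rst`, `foo` are assumed names):

```systemverilog
// sketch: after rst deasserts, foo must be high, then drop within 20 cycles
assert property (@(posedge clk) $fell(rst) |-> foo ##[1:20] !foo);
```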
Which means: if the rst signal fell (changed to 0), then foo must be 1, and 1-20 cycles later it must be 0.
The nice thing about them is that there are a few commercial tools that can formally verify them. They're super expensive (~$100k/year for one license), but fairly widely used because they work really well.
It's probably the most successful application of formal verification because it doesn't require much expertise to use, unlike software formal verification, which pretty much immediately requires you to become an expert on loop invariants, termination measures, Hoare triples, etc. At least that has been my experience.
dmos62 1 day ago [-]
Do you find Lean 4 sufficient for highly async systems?
See https://aws.amazon.com/builders-library/challenges-with-dist... for how async relates to distributed systems.
iFire 1 day ago [-]
I haven't made money on it yet, but I'm trying to model a WebTransport (HTTP/3, QUIC) system for massive-multiplayer VR games.
Claude has certainly been getting better with TLA+. It's not perfect yet but for laughs I got it to model the rules of Monopoly last night [1]. I haven't done any exhaustive checking on it yet, but it certainly looks passable.
It's pretty impressive how good it's gotten at this, in a relatively short amount of time no less. I still usually write my specs by hand, but who knows how much longer I'll be doing that.
[1] https://pdfhost.io/v/KU2j37YKrP_Monopoly
It looks quite complicated and I have no idea what it is doing. Obviously, since I don't know about TLA+. But what about someone who knows TLA+? It still seems hard to make sure it is valid. And it's just for a relatively simple game.
comex 1 day ago [-]
Well, for one thing:
> Decline to buy: property stays with bank (auction abstracted out)
Ignoring an entire game mechanic is really stretching the definition of “abstracted out”…
Also, at the bottom it defines a “Liveness: someone eventually wins” property which I believe cannot be proven. Monopoly doesn’t have any rules forcing the game to end eventually. There is only a probabilistic guarantee, and even that only applies if the players are trying to win; if the players are conspiring to prevent the game from ending then they’re unlikely to fail.
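For reference, the property in question would look roughly like this in TLA+ (IsWinner is a stand-in for however the spec actually defines winning):

```tla
Liveness == <>(\E p \in Players : IsWinner(p))
```

TLC checks such a property only relative to whatever fairness assumptions the spec declares, and since Monopoly's rules permit infinite games, TLC would report a counterexample lasso (an endless cycle of moves) rather than verify it.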
tombert 22 hours ago [-]
Yeah, I finally started thoroughly reading it this morning, and the lack of a guarantee of winning was one thing that stuck out to me as well.
I do think Claude is getting better at this stuff but I will probably keep writing the specs myself for now.
_flux 19 hours ago [-]
I've played a bit with having Claude generate a TLA+ model: I review it, Claude reviews it, and then Claude checks for alignment with the actual system. The code in the system can be annotated with comments linking back to the model, and vice versa, to make human and AI review easier.
I think it's highly promising. I might even call it a success. For example, I was able to express a bug in the system, and Claude was able to find me a property that would reveal it in the model, then revise the model to eliminate that bug, and then revise the code accordingly.
Additionally (with OpenAI), I have a small guest-management system for which I had the LLM generate a TLA+ model of the API: which endpoints a client can request, which tokens it learns from them (there are session ids, event ids, guest ids, and invitation ids that interact in interesting ways, since guests are not required to have an invitation to the system and can have only a magic-link-protected account, resulting in different visibility on the contact details of others), and what data it can or cannot access with those tokens.
I still haven't verified it manually, but it seems rather promising as well.
PlusCal is recommended as the gentler on-ramp when first learning TLA+.
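As a taste of why it's the gentler on-ramp: PlusCal reads like pseudocode, and the translator produces the TLA+ next-state relation for you. A toy example (not from the thread):

```tla
---- MODULE Counter ----
EXTENDS Naturals

(* --algorithm counter
variables x = 0;
begin
  Inc:
    while x < 3 do
      x := x + 1;
    end while;
end algorithm *)
====
```

Running the translator embeds the generated Init/Next definitions in the module, which TLC then checks like any hand-written spec.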
schaefer 23 hours ago [-]
From the link above, I also found the home page[1] of Leslie Lamport, the original developer of TLA+. His home page contains an incredible amount of information about TLA+: there's a published book with a PDF available, and there are video courses and talks.
There are also git repos of examples[2] and tools[3].
Thanks for the link, _doctor_love
[1] https://lamport.azurewebsites.net/ [2] https://github.com/tlaplus/Examples [3] https://github.com/tlaplus/tlaplus/
Welcome! Just one guy out here trying to be the change
randusername 1 day ago [-]
What's the advantage of provable correctness if it's apparently not easy to prove even for people who understand TLA+? I'm not trying to be a party pooper, just curious.
Isn't logical incorrectness less of a problem in software than failures of imagination or conscientiousness in modeling the domain?
tclancy 24 hours ago [-]
My thought (as someone interested in formal verification but unable to grok the math) is that it exists as a canary in any sufficiently complex codebase I let AI create. Even if it's wrong, knowing that something "we" changed breaks an agreement things currently assume is valuable.
NooneAtAll3 1 day ago [-]
> I haven't done any exhaustive checking on it yet, but it certainly looks passable.
Isn't that exactly the kind of failure LLMs produce the most? First-glance-passable nonsense?
atomicnature 1 day ago [-]
Just a question to people who may know better than me about this.
I thought the whole point of trying to write out TLA+ is so that you get a better idea of what you want and put it into formal language?
I get that an LLM can assist/help with expressing what we want in formal language a bit, but if one automates all this there is no human intent/design anymore.
If the LLM generates both the design (TLA+) and writes an arbitrary program that satisfies said design -- what exactly have we proved?
What assurance do humans get if the human doesn't know, or cannot specify, what they want?
majormajor 1 day ago [-]
An LLM-generated TLA+ model can be verified for certain things in a way that LLM-generated code can't. It's infamously hard to exhaustively unit-test concurrency.
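To make that concrete with a sketch (not from the thread): the classic lost-update race fits in a few lines of TLA+, and TLC enumerates every interleaving rather than the handful a stress test happens to hit:

```tla
---- MODULE LostUpdate ----
EXTENDS Naturals
VARIABLES x, t1, t2, pc1, pc2  \* shared counter, per-thread temps, program counters

Init == x = 0 /\ t1 = 0 /\ t2 = 0 /\ pc1 = "read" /\ pc2 = "read"

Read1  == pc1 = "read"  /\ t1' = x      /\ pc1' = "write" /\ UNCHANGED <<x, t2, pc2>>
Write1 == pc1 = "write" /\ x'  = t1 + 1 /\ pc1' = "done"  /\ UNCHANGED <<t1, t2, pc2>>
Read2  == pc2 = "read"  /\ t2' = x      /\ pc2' = "write" /\ UNCHANGED <<x, t1, pc1>>
Write2 == pc2 = "write" /\ x'  = t2 + 1 /\ pc2' = "done"  /\ UNCHANGED <<t2, t1, pc1>>

Next == Read1 \/ Write1 \/ Read2 \/ Write2

\* Fails: TLC finds Read1, Read2, Write1, Write2, which leaves x = 1
BothDone == (pc1 = "done" /\ pc2 = "done") => x = 2
====
```

Checking BothDone as an invariant (with deadlock checking disabled, since finished runs simply halt) produces the offending interleaving as a trace.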
Whether or not you're modeling the right things or verifying the right things, of course... that's always left as an exercise for the user. ;)
(How to prove the implementation code is guaranteed to match the spec is a trick I haven't seen generalized yet, either.)
kiwicopple 1 day ago [-]
> It's infamously hard to exhaustively unit-test concurrency.
a useful example from last week where TLA+ found a bug in pg_rewind:
https://multigres.com/blog/2026/05/04/tla-pg-rewind
This post reads like an accidental advertisement for approaches like Verus [1], which couple the implementation and verification so you can't end up with a model that diverges from the actual implementation. I'm personally much more optimistic about the verus approach, but I freely admit that's my builder bias speaking.
[1] https://github.com/verus-lang/verus
TLAiBench[0]: A dataset and benchmark suite for evaluating Large Language Models (LLMs) on TLA+ formal specification tasks, featuring logic puzzles and real-world scenarios.
[0]: https://github.com/tlaplus/TLAiBench
Sorry, this must be a very naive question, but what if you give the LLM just the source code (maybe even obfuscating names like Raft and etcd) and ask it to create a TLA+ spec of it?
_doctor_love 1 day ago [-]
This is already being done by some folks, reverse-engineering existing source into a TLA+ spec. Like other commenters have mentioned, the challenge is in ensuring that the spec and code match each other.
gr71 16 hours ago [-]
Is the training data for these test cases in the benchmark not already there? How do LLMs perform on novel, complex system spec design?
radarkilat 12 hours ago [-]
hmm