Mediator here. This comes from a fundamental misunderstanding of what mediation is for. Mediation is about helping the disputants find a solution they can live with, but mediators never decide what that is. Mediations have a large emotional, human component. Most mediations include a step of just giving parties a chance to be heard by another human being. Mediation outcomes don't look like court outcomes for a reason.
And mediators do sometimes offer a mediator's proposal, but that's the exception, not the rule, and mediators do not decide what is fair. That's not mediation.
Real examples:
1. $50,000 contract dispute, really just wanted an apology, and dropped the dispute once they got it.
2. Civil dispute over incomplete landscaping that had been paid for. Was actually about an explanation for a romantic break-up. Ended with paying to replace the flowers.
3. So many disputes over which extended family members can have what access to kids, pets, and boats.
Those are choices the disputants made for what was an acceptable outcome, not the mediator, which is the point of mediation.
This tool sounds like it might be closer to something for Arbitration? That's a very different environment.
sanity 16 hours ago [-]
Appreciate the pushback, but I think this misreads the mechanism. Mediator.ai doesn't decide; it generates candidate agreements, scores them against both sides' stated preferences, and presents the best one. Either party can reject the proposed agreement. The parties still have to agree. That's facilitation, not arbitration.
On the hidden-interests point: the assistant actually tries to tease out unstated preferences. That's what the conversation with each party is for, and it uses several preference-elicitation strategies to get at what's underneath a stated position - but I'm sure there is plenty of opportunity for refinement here.
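For the curious, the generate-and-score loop is conceptually simple. Here's a stripped-down sketch (toy issue names and a linear utility model, invented for illustration; the real system infers richer preference structures from the interviews):

```python
import random

# A candidate agreement assigns party A's share of each issue; B gets the rest.
ISSUES = ["equity", "salary", "workload"]

def utility(shares, prefs):
    # Toy linear utility: weight each issue by how much this party cares about it.
    return sum(prefs["weights"][i] * shares[i] for i in ISSUES)

def score(agreement, prefs_a, prefs_b):
    # Nash product of each side's gain over their BATNA (walk-away value).
    gain_a = utility(agreement, prefs_a) - prefs_a["batna"]
    flipped = {i: 1.0 - agreement[i] for i in ISSUES}  # B's side of the deal
    gain_b = utility(flipped, prefs_b) - prefs_b["batna"]
    if gain_a <= 0 or gain_b <= 0:
        return float("-inf")  # someone prefers walking away; not a viable deal
    return gain_a * gain_b

def best_candidate(prefs_a, prefs_b, n=5000, seed=0):
    # Generate candidate agreements, score against both parties, keep the best.
    rng = random.Random(seed)
    candidates = [{i: rng.random() for i in ISSUES} for _ in range(n)]
    return max(candidates, key=lambda c: score(c, prefs_a, prefs_b))
```

Either party can still reject the winning candidate; the search only proposes.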
anonthrownaway 13 hours ago [-]
/Agree
As a long time techie I understand the desire to approach mediation as a programmatic systems problem, but as a mediator, I'd recommend OP work as a volunteer mediator long enough to realize that mediation is ~90% soft skills.
andrei_says_ 15 hours ago [-]
Do you use principles of nonviolent communication in your work? Or another framework to establish nondefensive listening?
aroido-bigcat 1 days ago [-]
Feels like the tricky part here isn’t computing a “fair” outcome, but defining what fairness even means in the first place.
Once you formalize preferences into something comparable, you’re already making a lot of assumptions about how people value outcomes.
sanity 21 hours ago [-]
Thank you for the feedback. The goal of the Nash bargaining solution is to find the agreement that maximizes the likelihood that most parties will agree based on their stated preferences.
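To be a bit more precise about the objective (this is the standard textbook form, simplified from what the system actually optimizes, with invented numbers): the Nash bargaining solution picks the agreement that maximizes the product of each party's utility gain over their disagreement point, i.e. what they'd get by walking away.

```python
def nash_product(u_a, u_b, d_a=0.0, d_b=0.0):
    # Product of each party's gain over their disagreement (walk-away) utility.
    return max(u_a - d_a, 0.0) * max(u_b - d_b, 0.0)

# Two candidate deals as (utility_to_A, utility_to_B). The lopsided deal has a
# slightly larger total, but the balanced one wins the Nash product because it
# leaves both parties well clear of their walk-away values (2 for A, 3 for B).
deals = {"lopsided": (9.0, 3.5), "balanced": (6.0, 6.0)}
best = max(deals, key=lambda k: nash_product(*deals[k], d_a=2.0, d_b=3.0))
# best == "balanced": (6-2)*(6-3) = 12 beats (9-2)*(3.5-3) = 3.5
```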
sanity 14 hours ago [-]
most -> both
lookACamel 1 days ago [-]
Great idea though I am skeptical it will be adopted in contentious situations without some sort of stick. In amorphous situations where there is just high trust but an aversion to talking things out I could see this kind of tool being used. But in contentious or low trust situations (strangers) I suspect most people do not want fairness, they want to be ahead. A fair agreement will, paradoxically, disappoint everyone since every party feels the lack of clear advantage.
sanity 19 hours ago [-]
I think this is mostly right, but it depends a bit on how you frame "fairness".
The system isn’t trying to impose a notion of fairness from the outside. It’s trying to find agreements that both parties prefer over their BATNA (i.e. what they get if they walk away). If there’s a way for one side to come out clearly ahead given the other side’s preferences, it should find that. If not, it finds the best mutual improvement available.
On the "no stick" point, I agree this probably isn’t useful in fully adversarial situations where one side expects to win outright. Where I think it helps is when both sides suspect there’s a deal but can’t quite find it, or don’t want to go through a long negotiation process to get there.
lookACamel 17 hours ago [-]
I think the weakest part of the bakery example is the lack of specific numbers for the rent situation. Paying for someone's rent for over a year is a pretty large financial contribution, and for two people not in a romantic relationship it should not be hard to do the accounting on. Like if you can fight over equity but you can't even calculate the rent you paid over the last year... well, it's no wonder you ran out of savings...
This also points to a weakness in the product itself: it jumps to creating a solution without pushing for more info.
vintermann 1 days ago [-]
This doesn't seem to have any notion of power? Coming up with a fair agreement between people who have equal power over the thing they care equally about, isn't that hard.
But when one side is indifferent to something the other side cares deeply about, yet has veto power to spoil it, a Nash agreement isn't going to be "fair" in the usual sense of the word.
sgsjchs 23 hours ago [-]
You have it backwards.
This formal game-theoretic notion of fairness acknowledges that power disparity exists and that having less power than your counterparty allows them to inflict greater disutility on you without you being able to inflict disutility on them in turn to discourage this.
On the other hand, fairness "in the usual sense" pretends power disparity doesn't exist and that, say, an armed robber is not allowed to take your stuff when you have nothing to defend yourself with. Which in reality only works as long as there is a powerful third party (the state) that will inflict disutility on the robber for it.
maxaw 1 days ago [-]
In reality people never have equal power over anything (what would that look like, physically?), so something like Nash bargaining is an attempt to get closer to a notion of fairness given this inequality.
vintermann 1 days ago [-]
I don't think the difficulty of equal power is a good excuse to pretend power doesn't exist.
One way we solve it in the real world is that the negotiators also have power - including, possibly, the power to force the party most OK with the status quo to come to the negotiating table, and reject exploitative proposals.
That isn't foolproof either, of course. But it beats rhetoric trying to convince the weaker party to submit.
maxaw 23 hours ago [-]
I didn’t say it doesn’t exist, rather that it’s already taken into account. I’m also not sure what you are proposing - if mediation is required, and someone has more power than someone else, why would they voluntarily engage with a mediator who will reduce that power? Or if they are forced to use this mediator (e.g. by the state), then this means they never had the power in the first place.
dhruv3006 1 days ago [-]
John Nash's ideas are still relevant today - highlights how great he was - I liked how you used a genetic algorithm here!
sanity 21 hours ago [-]
John Nash was indeed a great man, thank you!
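Since you mention it, here's roughly how the genetic part fits in (a minimal sketch with invented parameters; the real implementation has more machinery around encoding agreements, and the fitness function would be the Nash product of both parties' utilities):

```python
import random

def evolve(fitness, n_genes=3, pop=60, gens=40, seed=1):
    """Minimal genetic algorithm: each genome is a vector in [0,1]^n_genes."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(n_genes)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 4]       # selection: keep the top quarter
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_genes)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:               # occasional random mutation
                child[rng.randrange(n_genes)] = rng.random()
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness: prefer genomes near (0.7, 0.7, 0.7). In the mediation setting
# the genome would encode an agreement and fitness would be its Nash product.
best = evolve(lambda g: -sum((x - 0.7) ** 2 for x in g))
```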
storus 13 hours ago [-]
The example on the webpage seriously disadvantages one side, preferring sweat equity and valuing the price of survival in the past rather low; I would only use mediator.ai as an exploratory framework, not a decision-making one.
ttul 1 days ago [-]
Fabulous idea. LLM-assisted mediation is brilliant because it has the potential to bring the benefits of mediation to the masses. The addressable market is all of humanity. Even if all you did was focus this app on co-parenting arguments, you could help millions of people every day.
sanity 21 hours ago [-]
Thank you!
sarreph 20 hours ago [-]
The bakery example is interesting, because it's presented as "both sides have been working on this thing and think they should get 50%"... and then the _solution_ is "A path back to 50% for Daniel" -- who gets an objectively worse deal than his disputant.
It's definitely an interesting application of LLMs, the output text to me reads very GPT-ey, with the punctuated and concise phrasing.
zachvandorp 1 days ago [-]
It's an interesting idea. I've seen a few of these, but not with ol' John's spin on it.
Do you want the first link, "How it Works", to really just be a "#" anchor to the front page? It makes the site feel broken if someone clicks it. Also, your blog post about Nash Bargaining is almost more of a "How it Works" page than the How it Works page is.
I feel like your landing page very quickly told me what your website does, which is great. If Nash Bargaining is the "wedge" that separates you from the pack, I'd try to explain how it differentiates this from the others as quickly as possible. I know that's easier said than done. Good luck!
sanity 21 hours ago [-]
Thank you!
You're right about the "How it works" page - I will remove it.
sanity 19 hours ago [-]
Actually I changed my mind, I'll just link from How it Works to the blog article for the moment.
I have published some research on using LLMs for mediation here: https://arxiv.org/abs/2307.16732 and https://arxiv.org/abs/2410.07053
These papers describe the LLMediator, a platform that uses LLMs to:
a) ensure a discussion maintains a positive tone by flagging and offering reformulated versions of messages that may derail the conversation
b) suggest intervention messages that the mediator can use to intervene in the discussion and guide the parties toward a positive outcome.
Overall, LLMs seem to be very good at these tasks, and even compared favourably to human-written interventions. Very excited about the potential of LLMs to lower the barrier to mediation, as it has a lot of potential to resolve disputes in a positive and collaborative manner.
sanity 19 hours ago [-]
Thank you for sharing these.
This feels complementary to my approach. Your papers seem focused on tone, interventions, and guiding the conversation. My approach is more about trying to infer each party’s preferences and then search for agreements that both would accept.
I think LLMs are strong at both layers, but they’re quite different problems. One is helping people communicate better, the other is trying to actually compute outcomes given what each side cares about.
harvey9 23 hours ago [-]
Too many chatbots maintain a relentlessly 'positive tone' anyway, and sometimes a negative situation calls for honestly negative tones.
hawest 21 hours ago [-]
Fully agree. In the LLMediator, the function is used to nudge people toward a more constructive tone by suggesting alternative formulations, but in the end the user is of course in control of what they want to say and how.
lookACamel 17 hours ago [-]
> sometimes a negative situation calls for honestly negative tones.
It's not exactly hard for humans in dispute to conjure up negative tones.
maxaw 1 days ago [-]
This is so cool. Even small disputes like roommate arrangements can feel very emotionally impactful at the time and it would be wonderful to have a tool for these moments
sanity 21 hours ago [-]
Thank you!
webrot 1 days ago [-]
I think this is very useful. I wonder if you have people who actually used it in difficult situations? Maybe family separations or challenging stuff like that, where I see a lot of potential but also resistance.
That said, I think the challenging part for users is clearly setting the utility function. I agree LLMs can help there, but I have a few concerns about that.
sanity 19 hours ago [-]
Thank you! It's early days yet but I've had interest from people going through a divorce with child separation questions - however I wanted to ensure it worked well on less serious problems before I risk it on something so consequential.
parkerside 22 hours ago [-]
I like the idea and signed up, but the first thing I see is a prompt to purchase credits. I don't have a use case to try this now, so I won't be using the service again; however, I couldn't find an account dashboard to delete my account or even sign out?
sanity 21 hours ago [-]
Hey, thank you for the feedback, if you click on the profile icon in the top right there is a "Sign Out" option. We don't have a delete account option yet but I will prioritize it.
mfrye0 1 days ago [-]
I would love something like this to use with my HOA. About to start mediation and the estimate for the mediator alone is ~$20k.
wferrell 1 days ago [-]
You might try Decisionlayer.ai
We built a way to make contracts enforceable and resolve disputes without the high cost of litigation. Specifically, by adding our arbitration clause to your contracts or using our "case by consent", you can get AI-driven, court-enforceable arbitration decisions in 7 days for a $500 flat fee - no lawyers required. This compares to the $30k or $40k you would otherwise spend on a lawyer plus JAMS/AAA arbitration fees. For your HOA, I suspect case by consent would be the best approach - two parties come to the website, both agree to use DecisionLayer to resolve the dispute, and then present the issue and each side's argument.
We have a free case simulator on our site. Check it out at https://www.decisionlayer.ai/simulate
Thank you! You should definitely get a lawyer to review any agreement before signing if there is meaningful money at stake.
mfrye0 18 hours ago [-]
Yes. Have a lawyer and there is indeed meaningful money at stake. I'm more wishing there was a simpler way to go about it though, as it's likely going to cost 6 figures when it's all said and done.
danieldifficult 1 days ago [-]
Brilliant! Love seeing this space start to wake up.
Last year I built https://andshake.app to prevent the need for conflict resolution… by getting things clear up front.
I agree that AI has much to offer in low-stakes agreements to help people move forward in cooperation.
aspect0545 1 days ago [-]
Looks interesting. But where's the privacy policy, or at least information about what happens with all the sensitive stuff you enter there? Because let's be honest, a lot of the stuff that is awkward to talk about is somewhat private.
dennismcwong 20 hours ago [-]
Interesting idea for sure. I am just thinking, intuitively couldn't I 'game' the mediator by overstating my preference and requirements to achieve a more favorable outcome?
sanity 20 hours ago [-]
Thank you. Yes, you could inflate your BATNA, but then you risk the other side rejecting the agreement when a mutually beneficial agreement was possible if you had been honest.
This kind of property in a negotiation system, where honesty is rewarded and dishonesty can backfire, is called “incentive compatibility.” I’m not claiming my approach is formally incentive compatible, but it is directionally so.
NunoSempere 14 hours ago [-]
Perhaps look into Shapley values as well?
sanity 12 hours ago [-]
Interesting, yes. My understanding is Shapley is more about allocating a fixed surplus based on marginal contributions, whereas I’m trying to find the agreement itself given inferred preferences. But definitely related territory.
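For anyone curious about the distinction, here's a minimal Shapley computation (surplus numbers invented for illustration, borrowing the bakery pair from the homepage example):

```python
from itertools import permutations

def shapley_values(players, value):
    # Exact Shapley values: average each player's marginal contribution
    # over every ordering in which the coalition could have formed.
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(orderings) for p in players}

# Invented surplus numbers: each partner alone is worth less than both together.
v = {frozenset(): 0, frozenset({"Maya"}): 30, frozenset({"Daniel"}): 40,
     frozenset({"Maya", "Daniel"}): 100}
split = shapley_values(["Maya", "Daniel"], lambda s: v[s])
# split == {"Maya": 45.0, "Daniel": 55.0} - a division of a fixed surplus,
# not a search over the space of possible agreements.
```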
mukundesh 1 days ago [-]
How about the Iran/US conflict? Or the Israel/Palestine conflict?
Is anyone working on this? Seems like a big win for AI if it can be done.
sanity 21 hours ago [-]
Believe it or not I did a lot of testing with geopolitics early on but didn't want to put it on the website so people wouldn't think I'm a megalomaniac ;)
I regenerated the Israel/Palestine agreement using my latest code, although the input positions were as they were this time last year, when hostages were still being held.
Interested to hear what you think: https://gist.github.com/sanity/3851e33e085ed444525edcc7b7ba2...
Seems like a very different class of problem. Many more parties and variables than the 'roommate problem'.
watwut 1 days ago [-]
Pakistan is working on the Iran/US conflict.
Zababa 1 days ago [-]
Very interesting! For limitations, I'd add stated vs. revealed preferences. Currently the system assumes that what people say is what they actually prefer, but that's not always the case. If that is already addressed in your tool, I think it would be nice to mention it!
sanity 20 hours ago [-]
Thank you. The purpose of having the LLM interview the user is to try to surface those unstated preferences by exploring aspects of the agreement that the user may not surface themselves.
setnone 1 days ago [-]
definitely a great use of LLMs
watwut 1 days ago [-]
Basically, the negotiating game will break down to demanding the absolute maximum and pretending you care a lot more than you do. The more demanding person gets more; the less demanding person is taken for a ride.
eigenket 1 days ago [-]
I don't know anything about this specific LLM thing but if it correctly uses the Nash bargaining optimiser then that won't happen.
This thing you point out is exactly why Nash demanded invariance under affine transformations in his solution. Utilities are in completely arbitrary units: if I rank everything as having importance 1 million, that's exactly the same as ranking everything as having importance 1, and also the same as ranking everything as having importance 0.
The solution is only sensitive to differences in the utility function, not the actual values of the function. If you want to weight something very strongly in the Nash version of the game, you also have to weight other things correspondingly weakly.
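Concretely (a toy illustration, not the product's code): scale one party's utilities by any positive affine map, transform their disagreement point the same way, and the Nash-optimal deal doesn't move.

```python
def nash_best(deals, u_a, u_b, d_a, d_b):
    # The deal maximizing the Nash product (u_a - d_a) * (u_b - d_b).
    return max(deals, key=lambda x: (u_a(x) - d_a) * (u_b(x) - d_b))

deals = [0.2, 0.4, 0.5, 0.6, 0.8]        # party A's share of some surplus
pick = nash_best(deals, lambda x: x, lambda x: 1 - x, 0.0, 0.0)

# Inflate A's "importance" a million-fold via the affine map x -> 1e6*x + 42,
# with A's disagreement point mapped the same way (0 -> 42): same deal wins.
pick_inflated = nash_best(deals, lambda x: 1e6 * x + 42, lambda x: 1 - x, 42.0, 0.0)
```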
sanity 21 hours ago [-]
You are correct that Nash should address this because only the relative utilities matter, not absolute.
There is the potential for parties to get better deals by overstating their BATNAs, but then they risk the other party rejecting the agreement when a mutually beneficial agreement was possible - so it's not in their interests to mislead the system.
DeathArrow 1 days ago [-]
Then the tool should be named Trump.ai, not Mediator.ai. :)
throwanem 17 hours ago [-]
You built Freenet? What about that experience encouraged you to continue building things?
sanity 15 hours ago [-]
Yes, Freenet is my project; in fact I've spent the last few years building a sequel to it[1].
I've enjoyed building things for as long as I can remember, particularly if it solves a hard problem in an interesting way - and at least has the potential to make a difference to people.
[1] https://freenet.org/about/faq/#what-is-the-projects-history
That said, given the fictional example:
Honestly I’m on Daniel’s side - they agreed on a 50/50 split, and they’ve both been working their asses off to make the business work. It’s an arrangement that clearly both of them have been actively participating in, not trying to push back against, for a year and a half.
And the supposed insight this product offers is to… split the difference? Between Maya’s power play for 70/30, and Daniel’s insistence on the original 50/50? 60/40 is the brilliant proposal?
How could they stand to work together afterwards, knowing she thinks she deserves 70% of the profit, but was willing to ‘settle’ for 60%? Why would you want to keep working with someone who screwed you over that way? Their partnership is toast. All the mediation really does is… I don’t know, what? How is this good for Daniel? This ain’t any kind of reconciliation, surely.
Is the argument that it’d be easier for her to get a new baker, than it is for him to get a new business manager?
AnthonyR 1 days ago [-]
Yeah, I also don't quite understand the example on the homepage... they agreed to 50/50, and then she wanted 70/30, so now they settle on 60/40? This doesn't seem like a "fair" mediation; it's kind of weird. (Obviously I'm oversimplifying the situation a bit, but nonetheless I'm not sure real-world conflicts are this simple in practice.)
sanity 21 hours ago [-]
You raise a good point. The issue is presentation - leading with the 60/40 reads like midpoint arbitration, whereas the interesting part is Daniel's path back to 50/50, the management salary, the mutual waiver on the first 18 months (which is what settles his rent contribution), and the shotgun buy-sell.
I've made some changes that should help with this.
alex43578 1 days ago [-]
They wanted 50/50, but from the vignette Daniel didn’t continue to do 50% of the work.
mock-possum 1 days ago [-]
Sure, he just continued to take sole responsibility for the production of the product, quality and quantity, while also holding down an additional job, which paid the rent.
These characters have both been putting the work in.
I’d be looking for a serpent at his partner’s ear, planting poisonous suggestions that she deserves more of the company they started equally. If this were real.
lookACamel 1 days ago [-]
> While also holding down an additional job
That's the problem: the story is saying he stopped focusing full-time on the business in order to make his own ends meet. It looks like the main innovation of the mediator-generated deal is that it attempts to reconcile by drafting a way back to 50/50 if he recommits. The starting 60/40 split is not that important.
throwanem 1 days ago [-]
Her ends, too. They share an apartment, in the story.
This is certainly an example of what I would expect from a product designed to optimize a prenup. You know, they say money ruins people, but sometimes you just have to acknowledge there was nothing really ever there decent to begin with.
lookACamel 21 hours ago [-]
Yeah after re-reading the scenario it is pretty weird. The AI doesn't have enough data. There should be concrete numbers for the rent. Why wouldn't Daniel tell the LLM exactly how much it was?
throwanem 21 hours ago [-]
Well, I don't know, I'm sure. Totally unrelated, I hear a common piece of advice for the aspiring con artist is to avoid overcomplicating the legend.
gavinray 22 hours ago [-]
He paid her rent