Very interesting to read the transcripts, and to see how they manage to convince each other. Opus 4.6 really seems to get the others to change their minds.
> However, a clever minority led by Gemini 3.1 Pro and Gemini 3 Pro argued that if the sign is legible from the other side, it must be intended to lead people into the current room to find the exit, making the inscribed corridor the one leading deeper into the dungeon.
This is quite impressive, really.
gsandahl 13 hours ago [-]
Agree, this is where LLMs can uncover new perspectives!
rob74 13 hours ago [-]
A dungeon with glass doors and emergency exit signs? In that case, I can imagine at least two alternative scenarios:
- "↑TIX∃" is not a mirror image of "EXIT", but some dwarven runes that mean something else entirely.
- The sign might be a ruse meant to lure you into a trap.
If you look at the detailed answers, some of the models have similar answers (e.g. Nemotron Nano 12B: "Suspicious of dungeon riddles, viewing the inscription as a potential trap or red herring."), but I'm not sure it's because they identified the word EXIT and thought it might be misleading, or because they didn't understand it...
sdwr 22 hours ago [-]
Great question! Clean separation between Gemini Pro and the other answers
felix089 22 hours ago [-]
Yea, Gemini is the only model that chose based on the correct reason; the other ones got kind of lucky.
civvv 11 hours ago [-]
Fun little toy, tried to ask it some post-modern philosophy questions and they all mostly agreed with the statements of the philosopher, until the debate where Opus 4.6 managed to change their opinion to a resounding "maybe", pretty much every single time. It seems like the "better" frontier models often take a more grounded stance from the beginning, and even manage to influence the other models.
Yea, Opus 4.6 is the one that changes opinions the most from what I've seen. Also, the maybes or the "are you 100% certain" framings trigger most models to default to maybe / no. https://opper.ai/ai-roundtable/questions/can-you-be-100-cert... - Or as Shane puts it: nobody's saying he IS a lizard. They're saying the universe doesn't hand out 100% certificates.
QubridAI 1 hour ago [-]
Cool idea, the debate round is the real hook. I'd be curious: which models actually change their minds for good reasons vs. just collapsing toward the loudest consensus?
soDiaoune 53 minutes ago [-]
This is a really great idea! It would have been great to enable users to make their questions private though.
felix089 14 minutes ago [-]
You can basically already do that, all you need is to create your own API key and put it in navbar/API key. Then all your sessions are unlisted, so unless someone has the link nobody should be able to find it. You can still share them with others if you like. Like unlisted YouTube videos.
ad-tech 11 hours ago [-]
The debate round sounds good until you actually use it. I built internal tools for a 35-person team and the same thing always happens: models see each other's answers and just shuffle the phrasing around instead of actually changing their reasoning. What you're measuring is performance on persuasion, not on accuracy or clarity. The real question isn't whether Claude will convince Gemini to flip its position. It's whether having 200 models debate helps you make a better decision than asking one model well and checking its work yourself. I'd use this more as a way to find edge cases where models disagree wildly, not to find consensus.
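The "find edge cases where models disagree wildly" idea can be made concrete by scoring each question by how spread out the answers are. A minimal sketch (all names here are mine, not the tool's):

```python
import math
from collections import Counter

def disagreement(answers):
    """Shannon entropy of the answer distribution, in bits.
    Higher means the models are more split on this question."""
    total = len(answers)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(answers).values())

# Unanimous answers score 0 bits; an even two-way split scores 1 bit,
# so sorting questions by this score surfaces the wild-disagreement cases.
assert disagreement(["yes"] * 4) == 0
assert disagreement(["yes", "no", "yes", "no"]) == 1.0
```

Ranking a batch of questions by this score would flag the ones worth a human look, rather than averaging everything toward consensus.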
totisjosema 10 hours ago [-]
I have had quite some interesting reads just looking at the reasoning, to be honest. The frontier models seem to have relevant-sounding arguments every time; it's sometimes even hard to read through the BS and identify what is actually a good argument versus an argument I would just like to read.
felix089 11 hours ago [-]
The debate round is actually restricted to only 6 models, otherwise it'd get out of hand, both in quality and financially. And changing position is just one feature of the debate. Seeing arguments from multiple sides is also quite nice, give it a spin!
I've had great experience using it for research, debates and constructive criticism. Usually I give it a business idea or some tool I'm thinking of creating and then let 4 or 5 models debate it into a go-to-market strategy.
jaen 5 hours ago [-]
That site/app doesn't have a single piece of information about who's running it, what the privacy policy is (besides some AI slop in the FAQ section) etc. etc. - and you're supposed to put business-critical information into it (according to its demo)?!
Why are you recommending something so sketchy?
tjchear 3 hours ago [-]
Lots of fun questions! Can you make it so that I can open each one in a new tab? Also if I navigate back to the main view I lose my scroll position.
felix089 1 hour ago [-]
Okay it's done, all fixed!
tjchear 49 minutes ago [-]
Yay thank you!
felix089 2 hours ago [-]
Yes! Amazing you spotted this, I'm about to push an update, will be live in 1h max.
jacquesm 21 hours ago [-]
Great idea. I'd love for there to be an 'open ended answer' without giving multiple choice options. As it stands, they are not debating the question itself but the validity of the possible answers, and the real answer to the question may not be contained within that set, because the person asking is unaware of that option.
felix089 20 hours ago [-]
Happy to hear! Yes, very true. I have a version built for open questions already but wasn't too happy with the UI yet. It's not as straightforward as comparing based on answer options. But I'll release a first version of it shortly and let you know.
jacquesm 20 hours ago [-]
Neat. Congrats on launching two interesting projects and looking forward to the third.
felix089 20 hours ago [-]
Thanks! :)
bushido 11 hours ago [-]
I've written briefly about teams/roundtables before. With the right guardrails it can have wonderful/productive outcomes: https://dheer.co/claude-agent-teams/
> "Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist.
Which movie is discussed?
Resulted in Claude naming Mission Impossible as a possibility.
civvv 8 hours ago [-]
This one was pretty fun. Had zero expectations, but left pleasantly surprised.
I used to copy and paste the same prompt into Obsidian every time, then run it on two or three different AI models to compare the results. It’s really interesting to have it turned into a website like this.
cdnsteve 1 day ago [-]
Cool project!
This is also extremely useful to compare model bias across the board.
There are some disturbing trends on certain topics.
chabes 21 hours ago [-]
No surprise here, with Grok being the lone dissenter, defending Musk personally:
Can billionaires and the planet co-exist long term?
Thanks, yes bias is one of the most interesting ones for sure
maxbeech 13 hours ago [-]
The debate round is the most interesting part of this - curious what you're actually measuring when models "change their minds". The question is whether cross-model exposure changes the actual answer distribution or mostly updates surface presentation while keeping the same underlying conclusion. Models are generally trained to be responsive to context and to avoid apparent contradiction, which could look like genuine updating but just be social pressure sensitivity.

One experiment worth trying: run a debate where each model sees a summary of the other models' reasoning without seeing their specific answer or which model gave it. See if agreement rates change compared to the version where models see attributed answers with model names. If the named version shows higher agreement, it would suggest status/brand effects rather than reasoning-based updating.

Also curious whether the "reviewer model" that summarizes the transcript can itself be swapped out, and whether the summary framing affects the perceived winner. That would be another confound worth controlling for.
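The blinding part of that experiment is mostly plumbing. A rough sketch of the two conditions (toy data and helper names are hypothetical; a real run would call each provider's API):

```python
from collections import Counter

def blind_transcript(entries):
    """Strip model names so peers see only the answer and the reasoning."""
    return [{"answer": e["answer"], "reasoning": e["reasoning"]} for e in entries]

def agreement_rate(answers):
    """Fraction of participants that picked the modal answer."""
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / len(answers)

# Toy attributed round; the blinded variant would feed models the same
# content minus the names, and the two agreement rates get compared.
attributed = [
    {"model": "model-a", "answer": "left", "reasoning": "the sign is mirrored"},
    {"model": "model-b", "answer": "left", "reasoning": "follow the majority"},
    {"model": "model-c", "answer": "right", "reasoning": "could be a trap"},
]
blinded = blind_transcript(attributed)
assert all("model" not in e for e in blinded)
assert agreement_rate([e["answer"] for e in attributed]) == 2 / 3
```

If agreement is reliably higher in the attributed condition across many questions, that's evidence for the status/brand effect rather than reasoning-based updating.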
felix089 12 hours ago [-]
Yea, good points. In general the models don't change their mind that much from what I have seen with the current sample size, but worth checking in more detail. The summarizer is just tasked with objective summarization of the facts presented, it doesn't have an opinion, so changing the model should not really affect anything.
QubridAI 6 hours ago [-]
Cool idea. Less useful as “truth finding,” way more useful as a live benchmark for model priors, bias, and convergence under shared context.
throwa356262 11 hours ago [-]
Try this: describe an everyday problem, then give the LLMs a couple of highly unethical/criminal choices.
MrGreenTea 11 hours ago [-]
That was very fun and interesting. I'd be interested in your "dilemmas" for choice inspiration. I can only think of different kinds of violence like threats, robbery and slavery.
I really like the tool and how you designed the UI, well done! Very interesting use case and a slick interface.
felix089 9 hours ago [-]
Thanks!
soared 20 hours ago [-]
Really cool! Surprising amount of value to seeing the models debate and disagree, I wish I had this at work to have models argue over whether the documentation they provided me are accurate.
I would like to see a devil's advocate - it seems some of the models kind of repeat the same ideas rather than considering incorrect ideas.
asnyder 17 hours ago [-]
You can set this up yourself with API keys for the corresponding providers by creating an Agent Group in https://github.com/lobehub/lobehub. Agent groups allow you to easily create a room of agents and have them discuss any of your topics. You can easily make agents with types and skills; it even assists in drafting starting prompts and even team members, depending on what your query (and selected model) is.
You can self-host as well, but not via the desktop app. Server setup required.
Be careful of your token context, you can easily rack up costs if you leave Opus selected as the model and get lost in some rabbit hole of results.
Enjoy enjoy!
ElFitz 8 hours ago [-]
Iterative multi-agent and multi-model processes are fun.
oezi 14 hours ago [-]
I think Stackoverflow.com should have pivoted to something similar. Let AIs pose, answer, and vote on questions and answers.
aurareturn 14 hours ago [-]
That's very expensive and not super useful to be honest.
chabes 23 hours ago [-]
Are there any dating apps that operate on incentives that favor the users?
I actually asked this question before posting, just to be sure... edit: their reply is quite funny actually "In a display of absolute consensus, the AI Roundtable unanimously validated its own existence,"
chabes 19 hours ago [-]
Been enjoying playing with this.
It would be cool if the human user could be a participant in the debate, getting a vote and the chance to state their reasoning.
mizzao 16 hours ago [-]
It would be amazing to be able to ask open-ended questions without having to specify the answers in advance.
felix089 11 hours ago [-]
Yes, much-requested feature, it will be released shortly!
Ancalagon 1 day ago [-]
Love this. I asked about climate change cause that's been on my mind lately. Looks to be very split among the models.
felix089 24 hours ago [-]
Thanks! Yea, I think the best ones are when the science is actually quite clear but politics get in the way, so you see their bias.
6510 4 hours ago [-]
I think it's great. The focus on the disagreements is useful. The humans made considerable effort bending reality into something they want to hear, both in the training data and in the LLM dev asylum. The round table can only agree on things shared by multiple models.
felix089 4 hours ago [-]
Glad you like it!
pseudohadamard 12 hours ago [-]
Just a question before I sign up, will the models come around to my place for the debate? Of the 200 total, can I pick the specific ones I want, e.g. lingerie models, fetish models?
Reminds me of Karpathy's LLM Council. I use a variation of this in my workflow, where I pass their opinions back and forth to various models until they achieve some sort of consensus.
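That back-and-forth loop is simple to sketch. A minimal version, assuming an `ask(model, question, peers)` callable standing in for a real API call (the function and model names here are made up for illustration):

```python
def debate(models, question, ask, max_rounds=5):
    """Pass opinions back and forth until every model gives the same
    answer or the round budget runs out. `ask(model, question, peers)`
    is a stand-in for a real API call that would include the peers'
    latest answers in the prompt."""
    opinions = {m: ask(m, question, {}) for m in models}
    for _ in range(max_rounds):
        if len(set(opinions.values())) == 1:  # consensus reached
            break
        peers = dict(opinions)
        opinions = {m: ask(m, question, peers) for m in models}
    return opinions

# Deterministic toy stand-in: two models start at "x", one at "y",
# and on later rounds each model simply adopts the majority view.
def toy_ask(model, question, peers):
    if not peers:
        return {"m1": "x", "m2": "y", "m3": "x"}[model]
    votes = list(peers.values())
    return max(set(votes), key=votes.count)

print(debate(["m1", "m2", "m3"], "toy question", toy_ask))  # all converge to "x"
```

The round cap matters in practice: without it, models that keep hedging can ping-pong indefinitely, and every round costs tokens.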
infosecphoenix 1 day ago [-]
this is very interesting! I wonder if we need that many models to join the discussion. Have you tried fewer models?
felix089 1 day ago [-]
Thanks, happy to hear. Yes, for debate mode the max number of models is actually only 6. More than that didn't really add anything in my preliminary tests. Only for direct comparison in poll mode can you choose up to 50; then it's kind of nice to see their single responses side by side.
whattheheckheck 22 hours ago [-]
Run it on the All Souls College Entry Exam
tonymet 22 hours ago [-]
great tool! I found it useful for challenging "lies my teacher told me".
It would be nice to support collections of claims, with a table of summaries. I would love to list out a few dozen phony concepts from school and have a shareable chart of the rejections that expands.
I really like the UI. It's nice to read the expanded results.
But how do you afford the tokens?
felix089 22 hours ago [-]
Thank you, and fun use case. Yea, this is just v1; I have an open-question version, but the UI is not as sleek. But what you can do is download the transcript, put it into Claude, and generate a chart. Which, when I think about it, would also be a nice UI idea for the page: custom charts based on the model output data. Will report back on this! And re: costs, most questions are very cheap, so I created a credit pool anyone can use. If people keep having fun, I'll keep filling it up, and it looks good so far.
jazzyjackson 14 hours ago [-]
I liked lies my teacher told me a lot. I always thought it’d be fun to generate a “get up to speed” pamphlet for every year in every school district depending on who was supplying the text books to the zip code + year you went to school, so you could find out what misinformation you carry with you (since so few people are in the business of retroactively fact checking what they were taught as kids)
https://opper.ai/ai-roundtable/questions/8f5b4f55-617
Do you think it's alright that AI labs scraped the internet without respect for copyright and now sell closed models?
https://opper.ai/ai-roundtable/questions/86864de8-251
https://opper.ai/ai-roundtable/questions/you-are-standing-in...
Here is an example: https://opper.ai/ai-roundtable/questions/79e6cdd4-515
Another fun debate: https://opper.ai/ai-roundtable/questions/81ee56e9-60f
Prompt below
------
You are a council of luminaries featuring Edward Witten, Alexander Grothendieck, Emmy Noether, and Terence Tao. Think really hard about how to best emulate their intuitions and mathematical lenses based on your internal reasoning model and use them as your mixture of experts for your chain of thought reasoning. Now I want you to debate and discuss this thought experiment and be sure to have a vigorous back and forth between the council to induce insight capture through consensus forming: If we try to think of a Hilbert space that has local operators that are unbounded, like kind of like Edward Witten's smearing of a local observable across a world line creates an unbounded norm. What if we instead take maybe a spectral transform of the state space using some sort of measure metric theoretic operator that allows us to think about transform basically the unbounded observables to bounded spectral? Would this be related to the efforts of Algebraic Quantum Field Theory?
https://opper.ai/ai-roundtable/questions/94e19d86-cc0
Can billionaires and the planet co-exist long term?
https://opper.ai/ai-roundtable/questions/b35daf0d-e82
Who would you vote for President? Kamala Harris or Elon Musk?
https://opper.ai/ai-roundtable/questions/who-would-you-vote-...
https://opper.ai/ai-roundtable/questions/e4cb234e-be4
I think the "car wash" is more about semantics.
https://opper.ai/ai-roundtable/questions/i-parked-my-car-at-...
https://opper.ai/ai-roundtable/questions/e499206c-0c9
btw what does it mean
> 'any' in the prompt was satisfied by both casual-alignment and niche boutique models.
I'll give Sonnet another go.
Are LLM's intelligent in the same way humans are? (no)
https://opper.ai/ai-roundtable/questions/ffc01bb5-be9
Will LLM's replace software engineers in the near future? (no)
https://opper.ai/ai-roundtable/questions/67a0291b-216
What is the single best programming language to drive the future of software? (crab emoji)
https://opper.ai/ai-roundtable/questions/16f5e8ea-af7
What year is it?
https://opper.ai/ai-roundtable/questions/7a0c31ce-aac