I'm no expert on chess engine development, but it's surprising to me that both lc0 and Stockfish use SPSA for "tuning" the miscellaneous magic numbers which appear in the system, rather than other black box optimization algorithms like Bayesian optimization or evolutionary algorithms. As far as I am aware, both of those approaches are used more often for similar tasks in non-chess applications (e.g. hyperparameter optimization in ML training) and have much more active research communities compared to SPSA.
Is there something special about these chess engines that makes SPSA more desirable for these use cases specifically? My intuition is that something like Bayesian optimization could yield stronger optimization results, and that the computational overhead of doing BO would be minimal compared to the time it takes to train and evaluate the models.
sscg13 1 days ago [-]
Engines like Stockfish might have over 100 "search parameters" that need to be tuned; to the best of my knowledge, SPSA is preferred because its computational cost typically does not depend on the number of parameters.
Or, if attempting to use SPSA to, say, perform a final post-training tune of the last layers of a neural network, this could be thousands of parameters or more.
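For reference, SPSA's trick is that it perturbs every parameter at once with a random ±1 vector and uses just two noisy evaluations per step, whatever the dimension. A minimal sketch of the update (the quadratic objective and the gain constants are made up for illustration; real engine tuning gets each noisy evaluation by playing a batch of games):

```python
import random

def spsa_maximize(f, theta, iters=2000, a=0.1, c=0.1, seed=0):
    """Minimal SPSA ascent: exactly two evaluations of f per step,
    no matter how many parameters theta has."""
    rng = random.Random(seed)
    for k in range(iters):
        ak = a / (k + 1) ** 0.602            # Spall's standard gain schedules
        ck = c / (k + 1) ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]  # Rademacher perturbation
        up = [t + ck * d for t, d in zip(theta, delta)]
        down = [t - ck * d for t, d in zip(theta, delta)]
        diff = f(up) - f(down)               # the only two (noisy) measurements
        theta = [t + ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Toy stand-in for "engine strength as a function of its parameters":
target = [0.5, -0.3, 0.8, 0.1, -0.6]
strength = lambda th: -sum((t - s) ** 2 for t, s in zip(th, target))
tuned = spsa_maximize(strength, [0.0] * len(target))
```

Note that the two evaluations `f(up)` and `f(down)` would each be a batch of games in the engine-tuning setting, which is why the per-step cost being independent of dimension matters so much.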
mpolson64 1 days ago [-]
The concern about the dimensionality of the search space is real, especially once things cross over into the 100s -- BO would certainly not be useful post-training the way the blog post talks about using SPSA.
That being said, it still seems possible that using a different black box optimization technique for a fairly constrained set of related magic numbers (say, fewer than 50) might lead to some real performance improvements in these systems; it could be worth reaching out to the lc0 or Stockfish development communities.
LPisGood 24 hours ago [-]
Are there any other well known examples of SPSA being used in practice?
LPisGood 1 days ago [-]
One thing I wonder is why design of experiments (DOE) methodology is so seldom used for these things.
Statisticians and operations researchers have spent a hundred years working out how to run as few experiments as possible while tweaking parameters in the ways that give the highest impact, with a statistical basis for trusting that the selections are good.
In the language of information and decision trees, these experiments are trying to in some sense “branch” on the entropy minimizing variables.
agalunar 1 days ago [-]
SPRT is used religiously in engine development today. There is enormous incentive to test efficiently.
https://github.com/official-stockfish/fishtest/wiki/Fishtest...
DOE is still very useful in many contexts, but when it's possible to use a sequential design, these adaptive techniques really start to pull away in terms of optimization quality.
There's simply a lot of sample efficiency to gain by adapting the experiment to incoming data in a regime where one can repeatedly design n candidates, observe their effects, and repeat m times compared to a setting where one must design a fixed experiment with n*m samples.
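For anyone unfamiliar, SPRT decides between "no gain" and "some fixed Elo gain" by accumulating a log-likelihood ratio game by game and stopping as soon as either error bound is crossed. A bare-bones sketch (draws are ignored and the elo0/elo1 bounds are illustrative numbers, not fishtest's actual configuration, which uses a more sophisticated outcome model):

```python
import math

def expected_score(elo_diff):
    """Elo model: win probability implied by a rating difference."""
    return 1.0 / (1.0 + 10.0 ** (-elo_diff / 400.0))

def sprt(results, elo0=0.0, elo1=5.0, alpha=0.05, beta=0.05):
    """results: iterable of booleans (True = candidate won that game).
    Returns ('accept H1' | 'accept H0' | 'continue', games used)."""
    upper = math.log((1 - beta) / alpha)   # cross this: candidate is stronger
    lower = math.log(beta / (1 - alpha))   # cross this: no improvement
    p0, p1 = expected_score(elo0), expected_score(elo1)
    llr, games = 0.0, 0
    for won in results:
        games += 1
        llr += math.log(p1 / p0) if won else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", games
        if llr <= lower:
            return "accept H0", games
    return "continue", games
```

The payoff is the early stop: a clearly stronger (or clearly non-improving) candidate terminates the test after far fewer games than a fixed-size experiment would need.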
incognito124 1 days ago [-]
Please be careful when visiting the homepage
NooneAtAll3 1 days ago [-]
as always, genius and insanity are only 1 step apart
hmmmmmmmmmmmmmm 1 days ago [-]
Load of bullshit. Let's not romanticise mental illness.
You now have a generation of people who think it is cool to be mentally ill.
tqpcharlie 1 days ago [-]
or a generation of people who feel comfortable talking candidly about their mental illness(es) and feel safe seeking help among others like them??
hmmmmmmmmmmmmmm 1 days ago [-]
You mean the people marketing their mental health issues online?
squeaky-clean 1 days ago [-]
Where can I buy some of what they're marketing?
Retric 1 days ago [-]
Mental illness has always been common and often been cool in one form or another.
Serial killers get fan mail, that’s true now and it was true 100 years ago.
tqpcharlie 21 hours ago [-]
100 years ago, people with mental illness were assumed to be "possessed by demons" and were either institutionalized against their will, lobotomized, or both. Or just left to die.
Retric 20 hours ago [-]
People are still institutionalized against their will, and others are left to die on the streets, but not all mental illness is particularly severe.
Psychoanalysis, while mostly quackery, is ~135 years old, providing an example where talking was considered a viable therapy, not just locking people up or tossing out lobotomies left and right at anyone slightly abnormal.
So sure, 100 years ago there was quackery just as today, but “possessed by demons” wasn’t considered mainstream back then any more than it is today.
hmmmmmmmmmmmmmm 1 days ago [-]
I think a lot of people still grow out of that phase. Like wanting to be like the Joker, or taking an 'am I a sociopath' test online and adopting your new edgelord persona, only to find it deeply cringeworthy later.
simlevesque 1 days ago [-]
I mean, some of the things on that homepage are truly the works of a genius IMO
NooneAtAll3 1 days ago [-]
no, homepage is insanity
blog post is good
pavel_lishin 1 days ago [-]
This really reminds me of the web as I remember it from the mid-to-late 90's; I feel like I'm just a click away from the old deoxy.org, if anyone remembers that. (Don't go there now; the domain appears to have been long-ago hijacked.)
fsiefken 1 days ago [-]
I loved the deoxy site, it was one of my favorites :-)
Next to the site and writings of the esoteric Brother Blue, who was he?
It eventually caused me to go into a reality tunnel for a few years. It was a fascinating and puzzling experience, similar to what was described in Cosmic Trigger III by R.A. Wilson.
xdennis 23 hours ago [-]
I guess we visited different parts. I don't remember any videos of people burning their dicks off on the early internet.
incognito124 1 days ago [-]
or kittens on encyclopediabrittanica
andix 1 days ago [-]
If you paid for MS FrontPage, you better get some value out of it!
WesolyKubeczek 1 days ago [-]
It gave me serious vibes of the old internet homepages of highly eccentric people that became part of internet folklore, whether in a good way or a bad way.
The video is probably the least bizarre thing there, if that's what you are warning about.
uncivilized 1 days ago [-]
What were you browsing where someone cutting off their own testicles is not as bizarre as other things? I didn't watch the video but at least there was a warning.
Feds this guy right here ^^
pavel_lishin 1 days ago [-]
> What were you browsing where someone cutting off their own testicles is not as bizarre as other things?
One of my formative early internet experiences was loading up a video of a man being beheaded with a knife.
Luckily, I realized what was about to happen, and didn't subject myself to the whole thing.
mghackerlady 1 days ago [-]
As a transgender woman, that isn't something I'd expect to see but am not surprised to see on a site called girl.surgery. dead doves and all that
ASalazarMX 1 days ago [-]
Looks at the chain of comments, then at the URL domain
Thanks for the warnings, kind strangers.
WesolyKubeczek 1 days ago [-]
There's some distance between setting pubes on fire and cutting testicles off, dare I say.
Although, setting any kind of hair on fire in public should be punishable, primarily because of the stench of burnt hair.
t1234s 1 days ago [-]
The homepage for this site is defiantly NSFW.
voxl 1 days ago [-]
You probably meant definitely, but defiantly amusingly works too
thinkingtoilet 1 days ago [-]
"definitely" or "defiantly"?
The idea of something being "defiantly" NSFW gave me a chuckle.
t1234s 8 hours ago [-]
Autocorrect for the win
RivieraKid 1 days ago [-]
AFAIK chess has been "solved" for a few years in the sense that Stockfish running on a modern laptop with 1 minute per move is unbeatable from the starting position.
helloplanets 1 days ago [-]
This is not true. Stockfish is not unbeatable by another engine, or another copy of Stockfish.
Chess engines have been impossible for humans to beat for well over a decade.
But a position in chess being solved is a specific thing, which is still very far from having happened for the starting position. Chess has been solved up to 7 pieces. Solving basically amounts to some absolutely massive tables that have every variation accounted for, so that you know whether a given position will end in a draw, black win or white win. (https://syzygy-tables.info)
LeifCarrotson 1 days ago [-]
The parent is using a different definition, so they put "solved" in quotes. What word would you suggest to describe the situation where the starting position with 32 pieces always ends in either a draw or win for white, regardless of the compute and creativity available to black?
I haven't verified OP's claim attributed to 'someone on the Stockfish discord', but if true, that's fascinating. There would be nothing left for the engine developers to do but improve efficiency and perhaps increase the win-to-draw ratio.
helloplanets 1 days ago [-]
Yea that's true, it's a pretty overloaded word. From what I remember though, even the top players thought that there wasn't anywhere left to go with chess engines, before Alpha Zero basically ripped the roof off with a completely different play style back in 2017, beating Stockfish.
And the play style of Alpha Zero wasn't different in a way that needs a super trained chess intuition to see, it's outrageously different if you take a look at the games.
I guess my point is that even if the current situation is basically a 'deadlock', it's been proven that it's not some sort of eternal knowledge of the game as of yet. There's still the possibility that a new type of approach could blow the current top engines out of the water, with a completely different take on the game.
sscg13 1 days ago [-]
However, it is true that Elo gain on "balanced books" has stalled somewhat since Stockfish 16 in 2023, which is also reflected on the CCRL rating lists.
IMO AlphaZero was partially a result of the fact that using more compute also works. Stockfish 10 running on 4x as many CPUs would beat Stockfish 8 by a larger margin than AlphaZero did. To this day, nobody has determined what a "fair" GPU to CPU comparison is.
gowld 1 days ago [-]
It's a strange definition of "solved".
War was "solved" when someone made a weapon capable of killing all the enemy soldiers, until someone made a weapon capable of disabling the first weapon.
RivieraKid 1 days ago [-]
Do you have a source? I remember asking on the Stockfish Discord and being told that Stockfish on a modern laptop with 1 min per move will never lose against Stockfish with 1000 min per move from the starting position.
But I'm not sure whether that guy was guessing or confident about that claim.
helloplanets 1 days ago [-]
There's the TCEC [0] which is a big thing in some circles. Stockfish does lose every now and then against top engines. [1] Usually it's two different engines playing against one another, though. Like Leela Chess Zero [2] vs. Stockfish.
In that hypothetical of running 2 instances of Stockfish against one another on a modern laptop, with the key difference being minutes of compute time, it'd probably be very close to 100% of draws. Depending on how many games you run. So, if you run a million games, there's probably some outliers. If you run a hundred, maybe not.
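The "million games vs a hundred games" intuition is just the complement rule; assuming (purely for illustration) that each game independently has some tiny probability p of being decisive:

```python
def p_any_decisive(p, n):
    """Probability of at least one decisive result in n independent games."""
    return 1 - (1 - p) ** n

# With an illustrative 1-in-100,000 decisive rate: a hundred games will
# almost surely be all draws; a million games almost surely will not be.
few = p_any_decisive(1e-5, 100)
many = p_any_decisive(1e-5, 1_000_000)
```

The 1e-5 rate is a made-up number for the sketch; the real per-game decisive rate between matched top engines from the start position isn't publicly pinned down.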
When it comes to actually solved positions, the 7-piece tables take around 1TB of RAM to even run. These tablebases are used by Stockfish when you actually want to run it at peak strength. [3]
I remember hearing that the starting position is so draw-ish that it's not practical anymore.
[0]: https://tcec-chess.com [1]: https://lichess.org/broadcast/tcec-s28-leagues--superfinal/m... [2]: https://lczero.org [3]: https://github.com/syzygy1/tb
LogicalRisk 1 days ago [-]
TCEC does force different openings, yes. Engines play both sides.
LogicalRisk 1 days ago [-]
Here's a game from a month ago where Stockfish loses to Lc0, played during the TCEC Cup. https://lichess.org/S9AwOvWn
Chess is a 2 player game of perfect, finite information, so by Zermelo's theorem either one side always wins with optimal play or it's a draw with optimal play. The argument from the Discord person simply says that Stockfish computationally can't come up with a way to beat itself. Whether this is true (and it really sounds like a question about depth in search) is separate from whether the game itself is solved, and it very much is not.
Solving chess would be a table that simply lists out the optimal strategy at every node in the game tree. Since this is computationally infeasible, we will certainly never solve chess absent some as yet unknown advance in computation.
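To make "a table listing the optimal strategy at every node" concrete on a game where it is feasible: the same backward induction that Zermelo's theorem licenses solves tic-tac-toe in milliseconds, and the answer there is the familiar "draw with optimal play":

```python
from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, to_move):
    """Game value from to_move's perspective: +1 win, 0 draw, -1 loss.
    Assumes legal play, so any completed line belongs to the previous mover."""
    if winner(board) is not None:
        return -1                       # opponent just completed a line
    if "." not in board:
        return 0                        # board full, no line: draw
    other = "O" if to_move == "X" else "X"
    return max(-solve(board[:i] + to_move + board[i + 1:], other)
               for i in range(9) if board[i] == ".")

# The entire game tree fits in the memo table; the root value is a draw.
value = solve("." * 9, "X")
```

Chess is the same recursion in principle; the table is just astronomically too large to materialize, which is exactly the gap between "engines never lose in practice" and "solved".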
RivieraKid 1 days ago [-]
What I meant by "solved" is "never loses from the starting position against Stockfish that has infinite time per move".
In the TCEC game, I see "2. f4?!", so I'm guessing Stockfish was forced to play some specific opening, i.e. it was forced to make a mistake.
gowld 1 days ago [-]
That means that Stockfish's parameters are already optimized as far as practically possible for Rapid chess and Slow chess, not that chess itself is solved, or even that Stockfish is fully optimized for Blitz and Bullet.
sscg13 1 days ago [-]
Surely it is apparent to you that the first few moves are not independently chosen by the engine, but rather intentionally chosen by the TCEC bookmakers to create a position on the edge between a draw and a decisive result.
Yes, engines would almost certainly never play 2. f4. That's a different question than whether chess is solved, for which the question of interest would be "given optimal play after 1. e4 e5 2. f4 is the result a win for one side or a draw?"
For what it's worth, Stockfish wins the rematch also. https://tcec-chess.com/#game=13&round=fl&season=cup16
It's also almost certainly the case, though I don't know why you would do it, that Stockfish given the black pieces and extensive pondering would be meaningfully better than Stockfish with a time-capped move order. Most games are going to be draws, so practically it would take a while to determine this.
I'm of the view that the actual answer for chess is "It's a draw with optimal play."
MengerSponge 1 days ago [-]
That just means that Stockfish doesn't get stronger with more than 1 minute per move on a modern computer. It doesn't say anything about other engines.
RivieraKid 1 days ago [-]
Stockfish with 1000 minutes per move is an approximation of a perfect chess player. So if Stockfish with 1 minute per move will never lose against a perfect player, it is unbeatable by any chess engine.
MengerSponge 21 hours ago [-]
> a perfect chess player
How could we possibly know this?
> it is unbeatable by any chess engine
So its engine is finished? There's no further development? No new algorithms?
RivieraKid 15 hours ago [-]
> How could we possibly know this?
Isn't it obvious that increasing time per move will make the engine better and at some point perfect?
> So its engine is finished? There's no further development? No new algorithms?
No.
sscg13 1 days ago [-]
Hypothetically, what reward would be worth the cost for you to attempt to beat Stockfish 18, 100 million nodes/move, from the starting position?
bee_rider 1 days ago [-]
“Solved” is a term of art. Defining it in some other way is not really wrong (since it is a definition) but it seems… unnecessary.
sscg13 1 days ago [-]
You can run Stockfish single threaded in a deterministic manner by specifying nodes searched instead of time, so in principle it is possible to set some kind of bounty for beating Stockfish X at Y nodes per move from the start position, but I haven't seen anyone willing to actually do so.
altruios 1 days ago [-]
Even by a stockfish running on a modern laptop with 2 minutes per move (provided they are going second)?!
RivieraKid 1 days ago [-]
Yes, that's what "unbeatable from the starting position" means.
altruios 1 days ago [-]
Can you link to the proof? It seems so implausible that chess has been 'solved'... How do we know an even higher time searching will not work?
Response from the author of Viridithas, there is a link to this engine in her webpage.
> I use she/her pronouns
See the main page https://girl.surgery/
And there's:
> Here's a video of me burning off my pubic hair in the alley.
TZubiri 1 days ago [-]
I know a fair deal about the subject of chess AI, but when I was reading this I didn't understand it. I was torn: was I reading a mastermind way above my level, or someone way too confident who learned enough buzzwords through an LLM to briefly delude someone other than themselves?
A quick visit to the homepage suggests that it's probably the latter. I don't want to be rude, and I'm not posting out of malice, but if someone else was reading this and trying to parse it, I think it might be helpful to compare notes and evaluate whether it's better to discard the article altogether.
Paracompact 1 days ago [-]
Curious, what has you believe that? As someone who doesn't know much about chess AI, I was mostly able to follow along, and figured there were simply some prereqs the author wasn't explaining (e.g. distillation, test-time search, RLVR). If the article is deeply confused in some way I would indeed like to calibrate my BS detectors.
TZubiri 1 days ago [-]
Just to confirm, did you read Cosmo's article (cosmo.tardis.uk black background), or the girl.surgery (white background) article?
ML isn't my strong suit so I wouldn't be able to explain how, but Cosmo's article is almost entirely a refutation of the points made by the root article. No doubt he is very friendly, as someone would be to anyone interested in their field.
What I can speak about is the general construction of sentences, they read (in the most charitable of interpretations) like text messages:
"Good model vs bad model is ~200 elo, but search is ~1200 elo, so even a bad model + search is essentially an oracle to a good model without, and you can distill from bad model + search → good model."
I take it that by "is ~X elo" they mean that implementing that strategy results in a gain of 200 ELO? Which would still be undefined, as 1000 to 1200 is not the same as 2800 to 3000, and improvements are of course not cumulative. I get that this reads more like internal notes, but it was published, so there was some expectation that it would be understood by someone else.
For a lot more reasons, the writing reminds me of notes written by me or by loved ones under the influence of drugs. My estimation is that the article was written by a mind that used to be brilliant but is now just echoing that brilliance, trying to keep their higher order cognitive functions while struggling to maintain the baseline of basic language use. I hope it is reversible and, if per is reading this and my estimation is correct, that they perturb the weights in favour of quitting drugs and see if they win more or not.
why_only_15 24 hours ago [-]
I think Cosmo's refutations were mostly not very useful and based on misunderstandings of what I was trying to say. This is fine and we discussed it prior to their article being published.
The point I was trying to make with "RL is only necessary once" is that you can embark on a single self-play loop getting better and better, and this will get you to something close to the frontier. Once you're at the frontier, the frontier doesn't move very much, so you have quite a while (decade?) where it's totally fine to distill from the RL games.
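A toy version of that idea's endpoint, on a game small enough to be honest about (one-pile Nim, take 1-3 stones, last stone wins): run the expensive exact search once, then distill its choices into a dumb lookup table that afterwards plays perfectly with no search at all. The game and framing here are mine, purely illustrative of the "search once, distill, then coast" shape, not the actual chess pipeline:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def search(n):
    """Exact search (the 'oracle'): (does side to move win?, a best move)
    at pile size n."""
    if n == 0:
        return (False, None)       # no stones left: previous player took the last one
    for take in (1, 2, 3):
        if take <= n and not search(n - take)[0]:
            return (True, take)    # found a move that leaves the opponent lost
    return (False, 1)              # lost either way; take one stone and hope

# "Distill": dump the search results into a plain table, the cheap "model"
# that from now on plays optimally without any search.
policy = {n: search(n)[1] for n in range(1, 101)}
```

In a fixed game the distilled table never goes stale; the chess analogue of the claim is that once you are near the frontier, the frontier moves slowly enough that distilling from old search-amplified games stays fine for a long time.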
On correction histories -- imo I correctly described what they do. Cosmo was annoyed by the word "adapt" but what I described was the adaptation.
On SPSA -- you don't have a gradient! You don't do backprop! This is what I was trying to get at.
an increase of X Elo points doesn't have a fixed meaning, as an increase from 1000 to 1200 would be very different from an increase from 1500 to 1700, and very different from an increase from 2800 to 3000.
There's a million ways a player or an engine can go from 1000 to 1200. But whatever tactic or change would make a player go from 1000 to 1200 could easily cause a 2800 player/engine to go to 2750 or 2400 or... 1200.
wizzwizz4 1 days ago [-]
This doesn't handle rock-paper-scissors-type strategy matchups, but at that level of detail, 1000 to 1200 is not the same as (a different) 1000 to 1200.
randomNumber7 1 days ago [-]
I personally don't believe the argument that search with a bad model helps so much. In e.g. an open position with lots of possibilities you would need an insane amount of calculations to beat a positional/strategic player with a bad engine.
TZubiri 24 hours ago [-]
It all depends on the nuances of the search and the board evaluation. I've built a traditional search-based engine; you can try playing against it. It plays out as you mention: you beat it positionally. But tbh if you play against a search-only engine without heuristics but with a more optimized algorithm, you may still fail to penetrate it unless you know anti-engine strategy.
lichess.com/@/TotomiBot
It currently uses a 3-ply exhaustive search, with the exception that captures don't count toward the ply limit, so it actually evaluates all branches up until the third non-capturing move.
For the evaluation, it uses two separate scoring values: one for material, using Lasker-style piece values, and another for tiebreaking, which is the positional score.
Positional score is mostly determined by a bitboard for each piece type, with positive and negative biases towards specific squares; for example, the king's bishop pawn is heavily incentivized to stay put. The boards are perspective-based, so it works the same whether you are black or white, without needing symmetry (which would make promotion strategies hard).
There's also a couple of heuristics for king safety.
The complexity has come to a point where it's hard to predict what will improve it or make it worse just by fiddling with the heuristics. But there's probably a lot of room for improvement in terms of reducing and optimizing compute time.
The AI is Python-based, but calculation (search) and evals are offloaded to a C lib for efficiency.
Elo is around 1400, and you can pretty much only beat it positionally (or with a very aggressive early sacrifice), as there's almost no hanging of pieces.
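A rough sketch of the two-tier scoring described above: material first, piece-square biases only as a tiebreak, which falls out naturally from lexicographic tuple comparison. All values and the tiny pawn table below are made-up stand-ins, not TotomiBot's actual numbers:

```python
# Illustrative centipawn values in the Lasker tradition (not the real ones).
MATERIAL = {"P": 100, "N": 305, "B": 325, "R": 500, "Q": 900, "K": 0}

# Made-up piece-square biases for pawns, indexed by rank counted from the
# mover's own side (perspective-based, so one table serves both colors).
PAWN_PST = [0, 0, 5, 10, 15, 20, 30, 0]

def evaluate(pieces):
    """pieces: iterable of (kind, rank_from_own_side, is_mine) tuples.
    Returns (material, positional); compared lexicographically, the
    positional term only ever breaks ties between materially equal lines."""
    material = positional = 0
    for kind, rank, mine in pieces:
        sign = 1 if mine else -1
        material += sign * MATERIAL[kind]
        if kind == "P":
            positional += sign * PAWN_PST[rank]
    return (material, positional)
```

With equal material, the position with the more advanced pawn compares greater, so a search maximizing these tuples prefers it without positional considerations ever outweighing material.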
I had only read girl.surgery. I have now read Cosmo's article.
> ML isn't my strong suit so I wouldn't be able to explain how, but Cosmo's article is almost entirely a refutation of the points made by the root article. No doubt he is very friendly, as someone would be to anyone interested in their field.
ML is familiar to me but far from my specialty. It was very difficult for me to understand the points from Cosmo's article, even if it seems more technically correct and less notes-y. Actually, it was likely because it was aiming for high technical correctness that some sentences are impossible for me to digest. (AlphaZero is a strange inversion of RL, where all of the “learning how to map situations to actions so as to maximize a numerical reward signal” is done online, by a GOFAI algorithm, and absolutely no reinforcement learning makes it into the actual gradient used to train the network!)
I think you may have misunderstood the Now we get to the scathing criticism line as being literal rather than ironic (or literal disguised as irony), because most of Cosmo's points are clarifications and distinctions only understandable or valuable to chess engine/ML experts. Many of Cosmo's points are agreement or unrelated; many others are self-professed nitpicks; and among the rest, I think Cosmo is being overly harsh. For example, the discussion on "no gradient" is an agreement in disguise, because what girl.surgery means to say (and what I understood the first read around) is simply that SPSA is like gradient descent, but without access to analytical derivations of derivatives. As another example, the discussion on "self-play was only necessary one time" leads to Cosmo only disagreeing with the language, not the description of the process; "bad model + search → good model" per girl.surgery is mirrored by Cosmo saying "To surpass that ceiling, you must search-amplify the new network, generating better data than the old oracle could, and distill again — and this is precisely the self-play loop," and if I had to guess girl.surgery means by "self play" bootstrapping from absolutely nothing rather than from another highly capable model.
> I take it that by "is ~X elo" they mean that implementing that strategy results in a gain of 200 ELO? Which would still be undefined, as 1000 to 1200 is not the same as 2800 to 3000, and improvements are of course not cumulative.
I understood +X elo over the next-best model, when the context is that of top-shelf models rather than near amateur human play. This usage of "elo gains" in generalized context is even used by Tilps and Crem in Cosmo's quote. It's just a ballpark of the magnitude of strength difference we're talking about, one which is actually not as contextually sensitive as you might think, because of what yorwba notes about the very definition of elo.
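Concretely, the Elo model defines expected score purely from the rating difference, so a 200-point gap predicts the same ~76% score at 1000 as at 2800; whether the model actually fits equally well at every level is the separate, empirical question:

```python
def expected_score(r_a, r_b):
    """Elo's defining formula: the expectation depends only on the difference."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# A 200-point gap predicts the same ~0.76 expected score anywhere on the scale.
low = expected_score(1200, 1000)
high = expected_score(3000, 2800)
```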
> For a lot more reasons, the writing reminds me of notes written by me or by loved ones under influence of drugs. My estimation is that the article was written by a mind that used to be brilliant but is now just echoing that brilliance while, trying to keep their higher order cognitive functions while struggling to maintain the baseline of basic language use. I hope it is reversible and if per is reading this and my estimation is correct, that they perturb the weights in favour of quitting drugs and see if they win more or not.
Very possibly. But I might offer an alternative, more charitable explanation: profound neurodivergence and/or mental illness. I personally know at least one troubled genius who writes like this, if not worse, but who is more than capable of very serious intellectual projects and research. The nature of autism tends to make it harder to write for a general audience without coming off as bizarre, and in my experience they are better at interactive, 1-on-1 discussions where you can ask questions to course-correct them away from burrowing too deep into their own head.
why_only_15 24 hours ago [-]
The chess people seemed to think my article was reasonably accurate. But I'm not really sure.
potsandpans 1 days ago [-]
This comment is another example of the "llm psychosis" that is currently occurring in common discourse.
The mass delusion of, "I don't understand what I'm reading, therefore it must be produced by an llm."
I think it's a pretty serious problem. Not that llm text exists on the internet, but that reasonable people are reflexively closed off to creativity because the mere existence of the possibility that something is created by an llm is in their minds grounds for disqualification.
TZubiri 1 days ago [-]
No no, the claim is not that it was produced by an llm, but rather that the author researches the subject with llms and generally is a high-frequency user.
A common property of llm psychosis is the development of an internal vocabulary that the llm learns, often reusing words but adopting specific meanings; for some reason "quantum" and "quantic" are very popular for this.
wizzwizz4 23 hours ago [-]
I didn't see any of that in either article.
I remember hearing that the starting position is so drawish that it's not practical anymore
Chess is a 2 player game of perfect, finite information, so by Zermelo's theorem either one side always wins with optimal play or it's a draw with optimal play. The argument from the Discord person simply says that Stockfish computationally can't come up with a way to beat itself. Whether this is true (and it really sounds like a question about depth in search) is separate from whether the game itself is solved, and it very much is not.
Solving chess would be a table that simply lists out the optimal strategy at every node in the game tree. Since this is computationally infeasible, we will certainly never solve chess absent some as yet unknown advance in computation.
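To make "a table at every node" concrete, here's a toy backward-induction solve of a trivial subtraction game. This is not chess and nothing like how Syzygy tablebases are actually built; it's just the shape of the idea, with the game rules chosen purely for illustration:

```python
from functools import lru_cache

# Toy "solved game": players alternately remove 1 or 2 stones from a pile;
# whoever takes the last stone wins. Solving it means building exactly the
# kind of table described above: every position mapped to its value under
# optimal play. (This toy game has no draws; chess would also need "DRAW".)

@lru_cache(maxsize=None)
def outcome(stones: int) -> str:
    """Return 'WIN' if the side to move wins with optimal play, else 'LOSS'."""
    if stones == 0:
        return "LOSS"  # the previous player took the last stone
    moves = [m for m in (1, 2) if m <= stones]
    # A position is winning iff some move reaches a losing position
    # for the opponent -- the core of backward induction.
    if any(outcome(stones - m) == "LOSS" for m in moves):
        return "WIN"
    return "LOSS"

# The full "tablebase" for piles up to 9 stones:
table = {n: outcome(n) for n in range(10)}
# Piles divisible by 3 are losses for the side to move.
```

The catch for chess is scale: the game tree has more positions than could ever be enumerated, which is exactly why only endgames up to 7 pieces have been tabled.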
In the TCEC game, I see "2. f4?!", so I'm guessing Stockfish was forced to play some specific opening, i.e. it was forced to make a mistake.
For what it's worth, Stockfish wins the rematch also. https://tcec-chess.com/#game=13&round=fl&season=cup16
It's also almost certainly the case (though I don't know why you would test it) that Stockfish given the black pieces and extensive pondering would be meaningfully better than Stockfish with a time-capped move order. Most games are going to be draws, so practically it would take a while to determine this.
I'm of the view that the actual answer for chess is "It's a draw with optimal play."
How could we possibly know this?
> it is unbeatable by any chess engine
So its engine is finished? There's no further development? No new algorithms?
Isn't it obvious that increasing time per move will make the engine better and at some point perfect?
> So its engine is finished? There's no further development? No new algorithms?
No.
Response from the author of Viridithas; there is a link to this engine on her webpage.
> I use she/her pronouns
See the main page https://girl.surgery/
And there's:
> Here's a video of me burning off my pubic hair in the alley.
A quick visit to the homepage suggests that it's probably the latter. I don't want to be rude, and I'm not posting out of malice, but if someone else was reading this and trying to parse it, I think it might be helpful to compare notes and evaluate whether it's better to discard the article altogether.
ML isn't my strong suit so I wouldn't be able to explain how, but Cosmo's article is almost entirely a refutation of the points made by the root article. No doubt he is very friendly, as someone would be to anyone interested in their field.
What I can speak about is the general construction of sentences; they read (in the most charitable of interpretations) like text messages:
"Good model vs bad model is ~200 elo, but search is ~1200 elo, so even a bad model + search is essentially an oracle to a good model without, and you can distill from bad model + search → good model."
I take it that by "is ~X elo" they mean that implementing that strategy results in a gain of 200 Elo? That would still be ill-defined, as 1000 to 1200 is not the same as 2800 to 3000, and improvements are of course not cumulative. I get that this reads more like internal notes, but it was published, so there was some expectation that it would be understood by someone else.
For a lot more reasons, the writing reminds me of notes written by me or by loved ones under the influence of drugs. My estimation is that the article was written by a mind that used to be brilliant but is now just echoing that brilliance, trying to keep their higher-order cognitive functions while struggling to maintain the baseline of basic language use. I hope it is reversible, and if per is reading this and my estimation is correct, that they perturb the weights in favour of quitting drugs and see if they win more or not.
The point I was trying to make with "RL is only necessary once" is that you can embark on a single self-play loop getting better and better, and this will get you to something close to the frontier. Once you're at the frontier, the frontier doesn't move very much, so you have quite a while (decade?) where it's totally fine to distill from the RL games.
On correction histories -- imo I correctly described what they do. Cosmo was annoyed by the word "adapt" but what I described was the adaptation.
On SPSA -- you don't have a gradient! You don't do backprop! This is what I was trying to get at.
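A minimal SPSA sketch makes that point concrete. This is illustrative only, not Stockfish's actual tuner; it uses fixed gains a and c for simplicity, whereas real SPSA decays both over the iterations, and the quadratic loss stands in for noisy match results:

```python
import random

# SPSA: estimate a descent direction from just TWO noisy evaluations per
# step, no matter how many parameters there are -- no analytical gradient,
# no backprop.

def spsa_step(params, loss, a=0.1, c=0.1):
    # Perturb *all* parameters simultaneously with random +/-1 signs.
    delta = [random.choice((-1.0, 1.0)) for _ in params]
    plus = [p + c * d for p, d in zip(params, delta)]
    minus = [p - c * d for p, d in zip(params, delta)]
    g = (loss(plus) - loss(minus)) / (2.0 * c)  # scalar; two evals total
    # Per-parameter estimate is g / delta_i, and 1/delta_i == delta_i here.
    return [p - a * g * d for p, d in zip(params, delta)]

# Toy usage: minimize a quadratic standing in for "match loss".
random.seed(0)
params = [5.0, -3.0]
for _ in range(200):
    params = spsa_step(params, lambda xs: sum(x * x for x in xs))
```

Note the cost structure the thread discusses: each step is two loss evaluations regardless of whether `params` has 2 entries or 2000, which is why dimensionality is cheap here compared to methods like Bayesian optimization.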
Elo is defined such that the expected win-rate of a player should only depend on the difference in Elo rating to their opponent. https://en.wikipedia.org/wiki/Elo_rating_system#Mathematical...
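That definition is short enough to write down directly. A sketch of the standard Elo expected-score formula (the logistic form with the conventional 400-point scale):

```python
# Expected score under the Elo model: a function of the rating
# *difference* only, which is exactly the property described above.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score (between 0 and 1) of player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
```

By construction, 1200 vs 1000 and 3000 vs 2800 give the same expected score (about 0.76), since only the 200-point gap enters the formula.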
An increase of X Elo points doesn't have a fixed meaning, as an increase from 1000 to 1200 would be very different from an increase from 1500 to 1700, and very different from one from 2800 to 3000.
There's a million ways a player or an engine can go from 1000 to 1200. But whatever tactic or change makes a player go from 1000 to 1200 could easily cause a 2800 player/engine to drop to 2750, or 2400, or... 1200.
lichess.org/@/TotomiBot
It currently uses a 3-ply exhaustive search, with the exception that captures don't count toward the ply limit, so it actually evaluates all branches up until the third non-capturing move.
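A rough sketch of that search rule, with the game-specific parts abstracted behind hypothetical callables (the real bot's move generation and evaluation live elsewhere; the toy game below exists only so the sketch runs):

```python
# Fixed-ply negamax where capturing moves don't consume depth, so tactical
# capture sequences are searched past the nominal ply limit.

def search(state, depth, moves, apply_move, is_capture, evaluate):
    """Negamax to `depth` plies; captures are searched at the same depth."""
    legal = moves(state)
    if depth <= 0 or not legal:
        return evaluate(state)
    best = float("-inf")
    for m in legal:
        child = apply_move(state, m)
        # The exception described above: a capture keeps the same depth.
        next_depth = depth if is_capture(state, m) else depth - 1
        best = max(best, -search(child, next_depth, moves,
                                 apply_move, is_capture, evaluate))
    return best

# Toy stand-in game: a pile of stones; taking 2 counts as a "capture".
toy_moves = lambda n: [m for m in (1, 2) if m <= n]
toy_apply = lambda n, m: n - m
toy_capture = lambda n, m: m == 2
toy_eval = lambda n: n  # crude static evaluation
```

This terminates in chess for the same reason quiescence search does: capture chains are bounded by the material on the board.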
For evaluation, it uses two separate scores: one for material, using Lasker-style piece values, and another for tiebreaking, which is the positional score.
The positional score is mostly determined by a bitboard for each piece type, with positive and negative biases toward specific squares; for example, the king's bishop's pawn is heavily incentivized to stay put. The boards are perspective-based, so it works the same whether you are black or white without needing symmetry handling (which would make promotion strategies hard).
There's also a couple of heuristics for king safety.
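A sketch of that two-tier scoring. The names, values, and table entries here are all illustrative stand-ins, not TotomiBot's actual numbers:

```python
# Two-tier evaluation: material as the primary score, a per-square bias
# table as the tiebreaker. Returning a tuple means material always
# dominates, because Python tuples compare lexicographically.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

# One 8x8 bias table per piece type, indexed [rank][file] from the side
# to move's own perspective, so the same table serves both colors.
PAWN_TABLE = [[0] * 8 for _ in range(8)]
PAWN_TABLE[1][5] = 5  # e.g. reward the king's bishop pawn for staying home

def evaluate(pieces):
    """pieces: list of (symbol, rank, file) for the side to move."""
    material = sum(PIECE_VALUES[sym] for sym, _, _ in pieces)
    positional = sum(PAWN_TABLE[r][f] for sym, r, f in pieces if sym == "P")
    return (material, positional)  # positional only breaks material ties
```

Comparing the returned tuples directly gives the ordering described above: any material edge outweighs any positional bonus.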
The complexity has come to a point where it's hard to predict what will improve or make it worse by just fiddling with the heuristics. But there's probably a lot of room for improvement in terms of reducing and optimizing compute time.
The AI is Python-based, but calculation (search) and evals are offloaded to a C library for efficiency.
Elo is around 1400, and you can pretty much only beat it positionally (or with a very aggressive early sacrifice), as there's almost no hanging of pieces.
https://github.com/TZubiri/TotomiChessAI
> ML isn't my strong suit so I wouldn't be able to explain how, but Cosmo's article is almost entirely a refutation of the points made by the root article. No doubt he is very friendly, as someone would be to anyone interested in their field.
ML is familiar to me but far from my specialty. It was very difficult for me to understand the points from Cosmo's article, even if it seems more technically correct and less notes-y. Actually, it was likely because it was aiming for high technical correctness that some sentences are impossible for me to digest. (AlphaZero is a strange inversion of RL, where all of the “learning how to map situations to actions so as to maximize a numerical reward signal” is done online, by a GOFAI algorithm, and absolutely no reinforcement learning makes it into the actual gradient used to train the network!)
I think you may have misunderstood the Now we get to the scathing criticism line as being literal rather than ironic (or literal disguised as irony), because most of Cosmo's points are clarifications and distinctions only understandable or valuable to chess engine/ML experts. Many of Cosmo's points are agreement or unrelated; many others are self-professed nitpicks; and among the rest, I think Cosmo is being overly harsh. For example, the discussion on "no gradient" is an agreement in disguise, because what girl.surgery means to say (and what I understood on the first read) is simply that SPSA is like gradient descent, but without access to analytical derivations of derivatives. As another example, the discussion on "self-play was only necessary one time" leads to Cosmo only disagreeing with the language, not the description of the process; "bad model + search → good model" per girl.surgery is mirrored by Cosmo saying "To surpass that ceiling, you must search-amplify the new network, generating better data than the old oracle could, and distill again — and this is precisely the self-play loop," and if I had to guess, girl.surgery means by "self play" bootstrapping from absolutely nothing rather than from another highly capable model.
> I take it that by "is ~X elo" they mean that implementing that strategy results in a gain of 200 ELO? Which would still be undefined, as 1000 to 1200 is not the same as 2800 to 3000, and improvements are of course not cumulative.
I understood +X elo over the next-best model, when the context is that of top-shelf models rather than near amateur human play. This usage of "elo gains" in generalized context is even used by Tilps and Crem in Cosmo's quote. It's just a ballpark of the magnitude of strength difference we're talking about, one which is actually not as contextually sensitive as you might think, because of what yorwba notes about the very definition of elo.
> For a lot more reasons, the writing reminds me of notes written by me or by loved ones under influence of drugs. My estimation is that the article was written by a mind that used to be brilliant but is now just echoing that brilliance while, trying to keep their higher order cognitive functions while struggling to maintain the baseline of basic language use. I hope it is reversible and if per is reading this and my estimation is correct, that they perturb the weights in favour of quitting drugs and see if they win more or not.
Very possibly. But I might offer an alternative, more charitable explanation: profound neurodivergence and/or mental illness. I personally know at least one troubled genius who writes like this, if not worse, but who is more than capable of very serious intellectual projects and research. The nature of autism tends to make it harder to write for a general audience without coming off as bizarre, and in my experience they are better at interactive, 1-on-1 discussions where you can ask questions to course-correct them away from burrowing too deep into their own head.
The mass delusion of "I don't understand what I'm reading, therefore it must be produced by an llm."
I think it's a pretty serious problem. Not that llm text exists on the internet, but that reasonable people are reflexively closed off to creativity because the mere existence of the possibility that something is created by an llm is in their minds grounds for disqualification.
A common property of llm psychosis is the development of an internal vocabulary that the llm learns, often reusing words but adopting specific meanings; for some reason "quantum" and "quantic" are very popular for this.