ah, shame it's for Show HN only. I am too embarrassed to Show HN anything I used AI for/with, like: https://www.susmel.com/stacky/ or https://susmel.com/rolly/ (which isn't a game yet - you can use shift for speed and it has a double jump though!)
yuppiepuppie 1 day ago
Love the music! Don't be afraid of putting them up and showing them off if that's something you want to do. They look great!
aed 1 day ago
These are both amazing! And I want to encourage you to do a Show HN. I think showcasing things people are building with AI is good. I'm actually very close, putting the finishing touches on something I'm building that will allow people to play Micropolis over API/MCP so you can watch agents build cities (mostly terribly, but it's quite fun).
Keyframe 1 day ago
Thanks man, I appreciate the encouragement. As a (really) long-time programmer it just feels all wrong. I understand it was all my input and that without me it wouldn't have come out like that at all, but still, having touched the code barely or not at all, it feels just bizarre and wrong to take credit for any of it, let alone "i did dis".
michaelanckaert 2 days ago
This is great! We need more ASCII games/simulations and it's only a bonus if it's in Emacs :-)
vkazanov 1 day ago
Author here.
I never intended this project to be some wonder of 3dfx or anything. In fact, the hope was to discuss implementation details.
Either way, for those interested: screenshots of both the terminal and graphical version are now included in the README.org.
mghackerlady 1 day ago
This may be unrelated but I swear emacs has a color tile library for games, it's used in tetris iirc
https://www.masteringemacs.org/article/fun-games-in-emacs
I completely forgot about Tetris and its nice visuals! Thanks, will see if I can use the library for ElCity.
larsbrinkhoff 17 hours ago
I think misinterpreting the "el" as Spanish is fun. In that vein, your game could be called ElCiudad.
vkazanov 16 hours ago
Somebody somewhere suggested doing a clone of Tropico called ElPresidente, which is even cooler.
Btw, Lars, you have endlessly more experience in Elisp than I do. Do you maybe have any ideas/directions on how to make the graphical mode look... a bit more decent and snappy?
larsbrinkhoff 12 hours ago
Sorry, I don't know anything about Emacs graphics. Some people confuse me with larsi, but I'm not that guy.
brimtown 1 day ago
This is wonderful. Consider decoupling the core from Emacs, or packaging in a way that doesn’t require it as heavily.
I’ve been doing my own exploration of terminal ASCII games via Dwarf Fortress instead of SimCity. I’ve learned that letting a coding agent play is an interesting way to get feedback as well :)
https://github.com/brimtown/claude-fortress
I tried something similar with a roguelike I was prototyping last year. Ended up being more useful for finding edge cases than actual gameplay feedback - the agent would do things no human would ever try, like walking into walls repeatedly or hoarding useless items. Still caught a bunch of bugs I never would have found otherwise.
internet_points 1 day ago
> Consider decoupling the core from Emacs, or packaging in a way that doesn’t require it as heavily.
but then we'd have to write an interface package to run it from emacs
larsbrinkhoff 1 day ago
How would it be run without Emacs?
You might point out that there are things like elisp.lisp that purport to run Emacs Lisp in Common Lisp, but I'm not sure that's viable for anything but trivial programs. There's also something for Guile, but I remain unconvinced.
Why not just use the best-known Emacs Lisp core, then? Like, say, Emacs.
notpushkin 1 day ago
To allow it to run on other lisp dialects as well.
(I’m just trying to defend GP’s point – I’m not a heavy lisp user myself, tbh.)
larsbrinkhoff 17 hours ago
Portability across Lisp dialects is usually not a thing. Even Emacs Lisp and Common Lisp which are arguably pretty close rarely if ever share code.
You could make a frontend for dialect A to run code from dialect B. Those things have been toyed with, but never really took off. E.g. cl in Emacs cannot accept real Common Lisp code.
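For a concrete taste of the gap, a tiny sketch (illustrative code, nothing from the projects mentioned): even trivial Common Lisp lambda lists need the cl-lib shim in Emacs.

    ;; Plain `defun' in Emacs Lisp only knows &optional and &rest;
    ;; CL-style keyword arguments need the cl-lib compatibility layer.
    (require 'cl-lib)

    (cl-defun greet (&key (name "world"))
      "Greet NAME, defaulting to \"world\"."
      (format "hello, %s" name))

    (greet :name "lisp")  ; => "hello, lisp"

    ;; And that's the shallow end: packages, multiple values, the
    ;; condition system, and reader syntax diverge far more deeply.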
I'm not arguing against the idea, I'm just curious how it would work because I see no realistic way to do it.
notpushkin 16 hours ago
Gotcha. Too bad – I was hoping there was at least some (non-trivial) subset you can run on both :(
Any idea why it is not a thing? Is this level of interop not practical for some reason?
larsbrinkhoff 14 hours ago
Lisp dialects have diverged quite a bit, and it would be a lot of work to bridge the differences to a degree approaching 100%. 90% is easy, but only works for small trivial programs.
I say this having written a "95%" Common Lisp for Emacs (still a toy) and having successfully run an old Maclisp compiler and assembler in Common Lisp.
https://github.com/larsbrinkhoff/emacs-cl
https://github.com/PDP-6/ITS-138/blob/master/tools/maclisp.l...
Here, the point was to have everything in emacs completely, and also see if the architectural constraints make sense for elisp (and they do).
And have some fun, of course.
Finally RMS can play SimCity.
https://medium.com/ssense-tech/a-look-at-the-functional-core...
The idea being that business logic gets written in synchronous blocking functional logic equivalent to Lisp, which is conceptually no different than a spreadsheet. Then real-world side effects get handled by imperative code similar to Smalltalk, which is conceptually similar to a batch file or macro. A bit like pure functional executables that only have access to STDIN/STDOUT (and optionally STDERR and/or network/file streams) being run by a shell.
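Here's a minimal Emacs Lisp sketch of that split (hypothetical names, not any real project's API): the core is a pure state-to-state function, and the shell owns every side effect.

    ;; Functional core: no I/O, no mutation of its input; it maps one
    ;; immutable city value to the next, like a spreadsheet recalc.
    (defun my-city-step (city)
      "Return the next state of CITY, a plist. CITY itself is untouched."
      (let ((pop (plist-get city :population))
            (funds (plist-get city :funds)))
        (list :population (+ pop (max 1 (/ pop 100)))  ; ~1% growth
              :funds (- funds (/ pop 10)))))           ; upkeep

    ;; Imperative shell: holds the current value and talks to buffers,
    ;; timers, and the user. All effects are corralled here.
    (defvar my-city '(:population 1000 :funds 5000))

    (defun my-city-tick ()
      "Advance the simulation one step and redraw."
      (interactive)
      (setq my-city (my-city-step my-city))
      (with-current-buffer (get-buffer-create "*my-city*")
        (erase-buffer)
        (insert (format "population: %d   funds: %d"
                        (plist-get my-city :population)
                        (plist-get my-city :funds)))))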
I think of these like backend vs frontend, or nouns vs verbs, or massless waves like photons vs massive particles like nucleons. Basically that there is no notion of time in functional programming, just state transitions where input is transformed into output (the code can be understood as a static graph). While imperative programming deals with state transformation where statically analyzing code is as expensive as just running it (the code must be traced to be understood as a graph). In other words, functional code can be easily optimized and parallelized, while imperative code generally can't be.
So in model-view-controller (MVC) programming, the model and view could/should be functional, while the controller (event handler) could/should be imperative. I believe that there may be no way to make functional code handle side effects via patterns like monads without forcing us to reason about it imperatively. Which means that impure functional languages like Haskell and Scala probably don't offer a free lunch, but are still worth learning.
Why this matters is that we've collectively decided to use imperative code for almost everything, relegating functional code to the road not taken. Which has bloated nearly all software by perhaps 10-100 times in terms of lines of code, conceptual complexity and even execution speed, making perhaps 90-99% of the work we do a waste of time or at least custodial.
It's also colored our perception of what programming is. "Real work" deals with values, while premature optimization deals with references and pointers. PHP (which was inspired by the shell) originally had value-passing semantics for arrays (and even subprocess fork/join orchestration) via copy-on-write, which freed developers from having to worry about efficiency or side effects. Unfortunately it was corrupted through design by committee when PHP 5 decided to bolt on classes as references rather than unifying arrays and objects by making the "[]" and "." operators largely equivalent like JavaScript did. Alternative implementations like Hack could have fixed the fundamentals, but ended up offering little more than syntactic sugar and the mental load of having to consider an additional standard.
To my knowledge there has never been a mainstream FCIS language. ClojureScript is maybe the closest IMHO, or F#. Because of that, I mostly use declarative programming in my own work (where the spec is effectively the behavior) so that the internals can be treated as merely implementation details. Unfortunately that introduces some overhead because technical debt usually must be paid as I go, rather than left for future me. Meaning that it really only works well for waterfall, not agile.
I had always hoped to win the internet lottery so that I could build and test some of these alternative languages/frameworks/runtimes and other roads not taken by tech. The industry's failure to do that has left us with effectively single-threaded computers which run around 100,000 times slower today (at 100 times the cores per decade) than they would have if we hadn't abandoned true multicore superscalar processing and very large scale integration (VLSI) in the early 2000s when most R&D was outsourced or cancelled after the Dot Bomb and the mobile/embedded space began prioritizing lower cost and power usage.
GPUs kept going though, which is great for SIMD, but doesn't help us as far as getting real work done. AI is here and can recruit them, which is great too, but I fear that they'll make all code look like it's been pair-programmed and over-engineered, where the cognitive load grows beyond the ability of mere humans to understand it. Basically, they may paint over the rot without renovating it.
I hope that there's still time to emulate a true multiple instruction multiple data (MIMD) runtime on SIMD hardware to run fully-parallelized FCIS code potentially millions of times faster than anything we have now for the same price. I have various approaches in mind for that, but making rent always comes first, especially in inflationary times.
It took me over 30 years to really understand this stuff at a level where I could distill it down to these (inadequate) metaphors. So maybe this is TMI, but I'll leave it here nonetheless in the hopes that it helps someone manifest the dream of personal supercomputing someday.
vkazanov 1 day ago
It took me about 15 years (out of 20 in the industry) to arrive at similar ideas. Interestingly, I had heard all the arguments many times before, but somewhat obscured by the way functional programming speaks of things.
For the purposes of this game, splitting things into core/shell makes certain things super easy: saving and restoring state, undo, debugging, testing, etc.
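A sketch of why those fall out for free (hypothetical names, not the actual elcity functions): when the whole world is one value produced by a pure step function, history is just a stack of old values.

    (defvar my-sim-state '(:year 1900 :funds 5000)
      "The entire world as one value.")
    (defvar my-sim-history nil
      "Past states, kept automatically on every tick.")

    (defun my-sim-advance (state)
      "Pure core: return the next STATE without touching the input."
      (plist-put (copy-sequence state) :year
                 (1+ (plist-get state :year))))

    (defun my-sim-tick ()
      (push my-sim-state my-sim-history)        ; the undo log is free
      (setq my-sim-state (my-sim-advance my-sim-state)))

    (defun my-sim-undo ()
      (when my-sim-history
        (setq my-sim-state (pop my-sim-history))))

    ;; Saving is printing one value; restoring is reading it back.
    (defun my-sim-save (file)
      (with-temp-file file (prin1 my-sim-state (current-buffer))))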
And one more bit, relevant to this new reality we find ourselves in. Having a bunch of pure functions merged into a very focused DSL makes it easy to extend the system through LLMs: a description of well-understood inputs and outputs fits into limited context windows.
By the way.
It is true that dedicated languages never arrived, but FCIS is not a language feature; it's more like an architectural paradigm.
So who cares?
zackmorris 6 hours ago
That's a fair question. For me, it's about removing the steep learning curves and gatekeeping from computer science and tech. Because the realities of being a developer have all but consumed my career with busywork.
For example, when I first learned about the borrow checker in Rust, it didn't make sense to me, because I had mostly already transitioned to data-driven development (just use immutable objects with copy-on-write and accept using twice the memory which is cheap anyway). I had the same feeling when I saw the syntactic sugar in Ruby, because it's solving problems which I specifically left behind when I abandoned C++. So I feel that those languages resonate with someone currently working with C-style code, but not, say, Lisp or SQL. We should be asking more of our compilers, not changing ourselves to suit them.
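For instance, a tiny sketch of that copy-on-write style (invented names): "mutation" returns a fresh value, the old one stays valid, and the aliasing questions the borrow checker polices simply don't arise.

    (defun cow-set (plist key value)
      "Return a copy of PLIST with KEY set to VALUE; PLIST is untouched."
      (plist-put (copy-sequence plist) key value))

    (let* ((old '(:x 1 :y 2))
           (new (cow-set old :x 99)))
      (list old new))
    ;; => ((:x 1 :y 2) (:x 99 :y 2))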
Which comes down to the academic vs pragmatic debate. Simple vs easy. Except that we've made the simple complex and the easy hard.
So I hold a lot of criticism for functional languages too. They all seem to demand that developers transpile the solution in their minds to stuff like prefix notation. Their syntax usually doesn't even look like equations. Always a heavy emphasis on pedantry, none on ergonomics. So that by the time solutions are written, we can't read them anyway.
I believe that most of these problems would go away if we went back to first principles and wrote a developer-oriented language, but one that's formal with no magic.
For example, I would like to write a language that includes something like gofmt that can transpile a file or code block to prefix/infix/postfix notation, then evolve the parser to the point that it can understand all of them. Which I know sounds crazy, but that would let us step up to a level of abstraction where we aren't so much concerned with syntax anymore. Our solutions would be shaped to the problems, a bit like the DSL you mentioned. And someone else could always reshape the code to what they're used to for their own learning.
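A toy sketch of the idea (binary arithmetic only, invented names): one AST, two renderings, exactly the round-trip a gofmt-style reshaper would do.

    (defun ast-to-prefix (ast)
      "Render AST, e.g. (* (+ 1 2) 3), as Lisp-style prefix text."
      (if (consp ast)
          (format "(%s %s %s)" (nth 0 ast)
                  (ast-to-prefix (nth 1 ast)) (ast-to-prefix (nth 2 ast)))
        (format "%s" ast)))

    (defun ast-to-infix (ast)
      "Render the same AST as conventional infix text."
      (if (consp ast)
          (format "(%s %s %s)" (ast-to-infix (nth 1 ast))
                  (nth 0 ast) (ast-to-infix (nth 2 ast)))
        (format "%s" ast)))

    (let ((ast '(* (+ 1 2) 3)))
      (list (ast-to-prefix ast) (ast-to-infix ast)))
    ;; => ("(* (+ 1 2) 3)" "((1 + 2) * 3)")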
You're right that FCIS is currently more of a pattern than a syntax. So the language would need to codify it. Normally imperative code would have to run in unsafe blocks, but I'd like to ban those, because they inevitably contaminate everything, leaving us with cruft. One way to do that might be to disallow mutability everywhere. Const is what allows imperative code to be transpiled to functional code and vice versa.
Except then we run into the problem of side effects and managing state, which leads us to monads, which leads us to promises/futures/closures and the async/await pattern (today's goto) which brings us full circle to where we started (nondeterminism), so we want to avoid those too. So we'd need to codify execution boundaries. Rather than monads, we'd treat all code as functional sync/blocking, and imagine the imperative shell as outside the flow of execution, at the point where the environment changes state (like a human editing a cell in a spreadsheet). Maybe the imperative shell should use a regular grammar (type 3 in Chomsky's hierarchy) to manage state transitions like Redux but not be Turing-complete (so more like a state machine than flow control).
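A sketch of that kind of shell (states and events invented): the transition table is plain data, no loops or recursion, so it stays a finite state machine rather than a Turing-complete program.

    (defun shell-next-state (state event)
      "Pure transition lookup: STATE x EVENT -> next state."
      (pcase (cons state event)
        (`(idle . tick)       'simulating)
        (`(simulating . done) 'rendering)
        (`(rendering . drawn) 'idle)
        (_ state)))                     ; unknown event: stay put

    (shell-next-state 'idle 'tick)  ; => simulating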
Except that state machines are hard to reason about above a few dozen states, especially with nested state machines. Thankfully state machines can be transpiled to coroutines and vice versa. So we can imagine the imperative shell sort of like a shader with const-only variables. An analogy might be using coroutines in Unity for sprite behavior, rather than polluting the main loop with switch() commands based on their state. I've been down both roads, and coroutines are so much easier to reason about that I'll never go back to state machines.
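And a sketch of the coroutine version (using Emacs's built-in generator library; the sprite is invented): the walk-right-then-left behavior reads as straight-line code instead of a state table.

    (require 'generator)

    (iter-defun sprite-patrol (start end)
      "Yield positions walking right to END, then back to START, forever."
      (let ((pos start))
        (while t
          (while (< pos end)            ; walk right
            (setq pos (1+ pos))
            (iter-yield pos))
          (while (> pos start)          ; walk left
            (setq pos (1- pos))
            (iter-yield pos)))))

    ;; The main loop just pumps each sprite's iterator; no per-sprite
    ;; state variable, no switch dispatch.
    (let ((walker (sprite-patrol 0 3)))
      (list (iter-next walker) (iter-next walker)
            (iter-next walker) (iter-next walker)))
    ;; => (1 2 3 2)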
I should add that I realized only recently that monads can be thought of as enumerating every execution path in the logic, so sacrificing them might be premature. For example, if we have a monad that's a boolean or undefined, and we've written a boolean logic function, then it becomes trinary logic with the monad. Which is related to stuff like Prolog, Verilog/VHDL, SPICE and SAT solvers, because we can treat the intermediate code like a tree because Lisp can be transpiled to a tree and vice versa. Then we can put the tree in a solver with the categories/types of the monads and formally define the solution space for a range of inputs. Sort of like fuzzing, but without the uncertainty. So the language should formalize monads too, not for I/O, but for solving and synthesis, so that we can treat code like logic circuits (spreadsheets).
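A minimal sketch of that boolean-or-undefined idea (invented names): lifting AND to three values gives Kleene-style logic, where undefined propagates only when the known inputs can't already force an answer.

    (defun tri-and (a b)
      "Three-valued AND over t, nil, and the symbol `undefined'."
      (cond ((or (null a) (null b)) nil)  ; a known nil forces nil
            ((or (eq a 'undefined) (eq b 'undefined)) 'undefined)
            (t t)))

    (mapcar (lambda (args) (apply #'tri-and args))
            '((t t) (t nil) (t undefined) (nil undefined)))
    ;; => (t nil undefined nil)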
Anyway, this is the low-hanging fruit. I haven't gotten into stuff like atomic operators (banning locks and mutexes), content-addressable memories for parallelizing execution without caching, reprogrammable hardware for stuff like loop optimization, etc. All of this represents the "real work" that private industry refuses to do, because it has no incentive to help the competition enter the walled gardens which it profits from. Fixing this stuff is up to academia (which is being constantly undermined), or people who have won the internet lottery (which presents a chicken and egg problem because they can't win without the thing that gets them to the thing).
Note that even though designing this language would be ambitious, the end result would feel familiar, even ubiquitous. I'm imagining something that looks like JavaScript/PHP but with value-only argument passing via const variables to higher-order methods (or automatic conversion from side-effect-free flow control statements), with the parallel code handling semantics of Octave/MATLAB, and some other frivolities thrown in like pattern matching, destructuring, really all of the bells and whistles that we've come to expect. It would auto-optimize to the fullest extent possible for a high-multicore machine (1000+ cores, optionally distributed on a network or the internet), so run millions of times faster (potentially infinitely faster) than most anything today that we're used to. Yes we'd still hit Amdahl's law, but not resource limits most of the time. And where some people might see a utopian dream, I see something pedestrian, even boring to design. A series of simple steps that are all obvious, but only from the perspective of having wasted a lifetime fighting the existing tools.
Sorry this got so long. Believe it or not, I tried to keep it as short as possible.
There is a screenshot in the README, and according to the GitHub timestamp, the project hasn't been changed since you wrote this to add such a screenshot.
Search for the section labeled: Visual Demo
agumonkey 1 day ago
I thought it was an ssh key fingerprint at first
Tiberium 1 day ago
It seems like it was added by an LLM since it says "This is a simplified snapshot to show the general layout."
Notice how it says "simplified snapshot", "general layout". I don't think this is an actual representation of how the game looks :)
vkazanov 1 day ago
Actually, this is a copy of an older version of the interface with a few lines dropped. I failed to generate a decent gif last night; will add a screenshot.
Admittedly, while working on this, I did consult my LLM advisor through gptel (https://github.com/karthink/gptel) with a few custom tools set up, which I cannot recommend enough.
PurpleRamen 1 day ago
Is this technically a screenshot? I mean, it's text, not a picture, so it's more of an output example.
apgwoz 1 day ago
Can text be put on a screen? And can you take a picture of it when it is? Well, you might have a screenshot of text.
PurpleRamen 1 day ago
That wasn't done here.
vkazanov 1 day ago
There you go!
morkalork 1 day ago
Congrats on your Seventh Sally!
somalihoaxes 1 day ago
Of course. Emacs. Soon: Playing pool and poker in Emacs. Because why not? Everything is a nail, right?