I like the looks of this, and the idea behind it, but TypeScript via Deno is an audited language with a good security model, a good type system, and sandboxing in an extremely well-hardened runtime. It's also a language that LLMs are exceptionally well-trained on. What does Mog offer that's meaningfully superior in an agent context?
I see that Deno requires a subprocess which introduces some overhead, and I might be naive to think so, but that doesn't seem like it would matter much when agent round-trip and inference time is way, way longer than any inefficiency a subprocess would introduce. (edit: I realized in some cases the round-trip time may be negligible if the agent is local, but inference is still very slow)
I admittedly do prefer the syntax here, but I'm more so asking these questions from a point of pragmatism over idealism. I already use Deno because it's convenient, practical, and efficient rather than ideal.
andreybaskov 17 hours ago [-]
It's a legitimate question to ask about any new language post-AI: given there is no training dataset, any other language would work better with AI.
The bigger problem is maintainability over the long term. Deno is built by the Node.js creator and has been maintained for half a decade now; that's hard to compete with. In a way it's much more about social trust than particular syntax.
reitzensteinm 13 hours ago [-]
Coding is a verifiable domain, so I think you actually have it backwards on that first point. We can now synthesize Stack Overflow sized datasets for an arbitrary new language, and use those to train LLMs to understand it.
It's expensive of course, but if a new language is genuinely better for LLMs to write and understand, that would not be an issue.
embedding-shape 16 hours ago [-]
> given there is no training dataset, any other language would work better with AI.
I guess it depends on what "would work better" really means, but I don't think it's always a given. I've made my own languages, there is no available training set on exactly those, but AI with a prompt can figure out how to effectively use them as much as any other language, it seems to me. I guess it helps that most languages are more similar to each other than different, but even experimenting with new syntax seems to work out OK for me.
TheDong 10 hours ago [-]
To me it seems like a pretty strong given because context windows are an important thing.
I can tell an LLM "write hello world in C", and it will produce a valid program with just that context, without needing the C language spec or stdlib definition in the context window, because they're baked into the model weights.
As such, I can use the context window to for example provide information about my own function signatures, libraries, and objectives.
For a language not well-represented in the training data-set, a chunk of my context has to be permanently devoted to the stdlib and syntax, and while coding it will have to lookup stdlib function signatures and such using up additional context.
Perhaps you're trying to argue that the amount of tokens needed to describe the language, the stdlib, the basic tooling to look up function signatures, commands to compile, etc is not enough tokens to have a meaningful impact on the context window overall?
verdverm 16 hours ago [-]
I'd also add all the other things that users expect around a language:
- GitHub syntax highlighting
- IDE integrations, LSP
- Modules and dependency management
I don't see an agent first language becoming a thing while humans are still ultimately responsible.
stillpointlab 17 hours ago [-]
One thing that comes to mind, more of a first reaction than a considered opinion, is the complexity of V8 getting in the way. JavaScript and Typescript present a challenge to language implementors.
There is something to be said about giving AIs a clean foundation on which to build their own language. This allows evolution of such systems to go all the way into the compiler, beyond tooling.
0cf8612b2e1e 16 hours ago [-]
I cannot comment on the new language, but TypeScript is a huge spec. Yes, it has guardrails, but there is a lot of complexity to handle.
Something purpose built to enable embedding allows it to be used in more contexts. Maybe I want a Mog plugin for my latest video game. Embedding JS is possible, but no fun.
steve_adams_86 13 hours ago [-]
Right, this is probably the most compelling case for Mog. TypeScript is a beast. Everything you've mentioned here has been an obstacle for me at some point(s) or another.
I didn't mean to suggest there's no need for Mog either. I love to see developments like this. Deno is a practical solution for me today, but I see why it isn't a perfect forever-solution too.
anilgulecha 9 hours ago [-]
But TypeScript is already trained into every model, and needs no additional work.
belisarius222 16 hours ago [-]
I generally agree. TypeScript is a great language, and JS runtimes have certainly had a lot of money and effort poured into them for a long time. I would add WASM to this category, as probably the closest thing to Mog. Write a program in some language, compile it to WASM, and load it into the host process. This is (probably) nice and safe, and relatively performant.
Since it's new, Mog will likely not yet beat existing systems at basically anything. Its potential lies in having better performance and a much smaller total system footprint and complexity than the alternatives. WASM is generally interpreted -- you can compile it, but it wasn't really designed for that as far as I know.
More generally, I think new execution environments are good opportunities for new languages that directly address the needs of that environment. The example that comes to mind is JavaScript, which turned webpages into dynamically loaded applications. AI agents have such heavy usage and specific problems that a language designed to be both written and executed by them is worth a shot in my opinion.
Wasm is definitely designed to be compiled, either ahead of time or JITed. Wasm interpreters are few and far between.
belisarius222 15 hours ago [-]
Huh you're right. I had worked with interpreted WASM before, which is why I thought that was more common.
WASM is a great system, but quite complex -- the spec for Mog is roughly 100x smaller.
castral 17 hours ago [-]
I agree with this take. What does this bring to the table that can't be done with pretty much any preexisting toolset? Hell, even bash and chroot jail...
iso-logi 11 hours ago [-]
This language could mog the ASU frat leader into a career-ending cortisol spike. Clav finally has some clapback after getting trolled by foids all day.
usmanmehmood55 5 hours ago [-]
I pay my internet bills for this
groundzeros2015 11 hours ago [-]
Extremely online friends
rapind 14 hours ago [-]
For me this is Gleam. Fairly small lang, type safe, compiled, NO NULLS (very important IMO), good FFI, code is readable, and... you get the BEAM!
Agents can pretty much iterate on their own.
The most important thing for me, at least for now (and IMO the foreseeable future) is being able to review and read the output code clearly. I am the bottleneck in the agent -> human loop, so optimizing for that by producing clear and readable code is a massive priority. Gleam eliminates a ton of errors automatically so my reviews are focused on mostly business logic (also need to explicitly call out redundant code often enough).
I could see an argument for full on Erlang too, but I like the static typing.
shakna 14 hours ago [-]
How's Gleam's IO story today? Still need to go via OTP?
rapind 14 hours ago [-]
Under the hood you're still calling into BEAM/OTP, but most operations are covered in Gleam; at least I've very rarely needed to FFI into Erlang (a couple of times, to extend what a library provides with a custom edge case).
gleam/io, mist, gleam/otp, sqlight, gleam_pgo, etc.
shakna 4 hours ago [-]
gleam/io doesn't actually support file IO, though, does it? Only printing to stdout.
So you'd need to use FFI just to write to disk, right?
rapind 2 hours ago [-]
There’s simplifile
dnautics 7 hours ago [-]
no nulls is truly not a problem. Of all the problems I have seen, I have never seen Claude mess up with nulls in Elixir.
TheDong 11 hours ago [-]
I think the AI labs need to be the ones to build AI-specific languages so they can include a huge corpus in the model training data-set, and then do RL on it producing useful and correct programs in that language.
If anthropic makes "claude-script", it'll outmog this language with massive RL-maxing. I hope your cortisol is ready for that.
If you want to try and mog claude with moglang, I think you need to make a corpus of several terabytes of valid useful "mog" programs, and wait for that to get included in the training dataset.
mhink 13 hours ago [-]
One nitpick I noticed:
> String Slicing
> You can extract a substring using bracket syntax with a range: s[start:end]. Both start and end are byte offsets. The slice includes start and excludes end.
Given that all strings are UTF-8, I note that there's not a great way to iterate over strings by _code point_. Using byte offsets is certainly more performant, but I could see this being a common request if you're expecting a lot of string manipulation to happen in these programs.
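For illustration (Python shown, since the thread doesn't include Mog examples), byte-offset slicing on UTF-8 can split a code point in half:

```python
s = "héllo"                     # "é" is two bytes in UTF-8
b = s.encode("utf-8")

# byte-offset slicing, as described for s[start:end]
assert b[0:3].decode("utf-8") == "hé"

# slicing mid-code-point leaves bytes that no longer decode cleanly
assert b[0:2].decode("utf-8", errors="replace") == "h\ufffd"

# iterating by code point requires working on the decoded string
assert list(s)[:2] == ["h", "é"]
```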
Other than that, this looks pretty cool. Unlike other commenters, I kinda like the lack of operator precedence. I wouldn't be surprised if it turns out to be not a huge problem, since LLMs generating code with this language would be pattern-matching on existing code, which will always have explicit parentheses.
Retr0id 17 hours ago [-]
> Compiled to native code for low-latency plugin execution – no interpreter overhead, no JIT, no process startup cost.
If you're running the compiled code in-process, how is that not JIT? And isn't that higher-latency than interpreting? Tiered-JIT (a la V8) solves exactly this problem.
Edit: Although the example programs show traditional AOT compile/execute steps, so "no process startup cost" is presumably a lie?
belisarius222 16 hours ago [-]
Mog is AOT-compiled, not JIT'd.
JIT means the code is interpreted until some condition kicks in to trigger compilation. This is obviously common and provides a number of advantages, but it has downsides too:
1) Code might run slowly at first.
2) It can be difficult to predict performance -- when will the JIT kick in? How well will it compile the code?
With Mog, you do have to pay the up-front cost of compiling the program. However, what I said about "no process startup cost" is true: there is no other OS process. The compiler runs in process, and then the compiled machine code is loaded into the process. Trying to do this safely is an unusual goal as far as I can tell. One of the consequences of this security posture is that the compiler and host become part of the trusted computing base. JITs are not the simplest things in the world, and not the easiest things to keep secure either. The Mog compiler is written entirely in safe Rust for this reason.
This up-front compilation cost is paid once, then the compiled code can be reused. If you have a pre-tool-use hook, or some extension to the agent itself, that code runs thousands of times, or more. Ahead-of-time compilation is well-suited for this task.
If this is used for writing a script that an agent runs once, then JIT compilation might turn out to be faster. But those scripts are often short, and our compiler is quite fast for them, at least in the benchmarking I've done -- there are benchmarking scripts in the repo, and it would be interesting to extend them to map out this landscape more.
Also, in my experience, in this scenario, the vast majority of the total latency of waiting for the agent to do what you asked it is due to waiting for an LLM to finish responding, not compiling or executing the script it generated. So I've prioritized the end-to-end performance of Mog code that runs many times.
sjrd 16 hours ago [-]
Hum, IIRC, using your definition of a JIT compiler, then V8 is an AOT compiler. V8 never interprets code. It immediately compiles it to machine code. It improves it later, but it's never slow.
Maxatar 14 hours ago [-]
V8 is a JIT compiler that uses the Ignition [1] interpreter and only compiles sections of code down to machine instructions, via TurboFan [2], once they've been marked as hot.
V8 can also go back and forth from machine instructions back to bytecode if it identifies that certain optimization assumptions no longer hold.
That literally is the definition of a JIT: it does a quick parse, compiles hot parts, and improves them later on.
[1] https://v8.dev/docs/ignition
[2] https://v8.dev/docs/turbofan
dana321 17 hours ago [-]
They could have just used Cranelift for JIT, but no...
skybrian 10 hours ago [-]
Coding agents gain a lot of power from being able to download specialized utility programs off the Internet, using apt-get or whatever. So it seems like running in a VM is going to be more popular?
A limited plugin API is interesting in some ways, but it has "rewrite it in Rust" energy. Maybe it's easier to flesh out a new library ecosystem using a coding agent, though?
mkl 15 hours ago [-]
> it's intended to minimize foot-guns to lower the error rate when generating Mog code. This is why Mog has no operator precedence: non-associative operations have to use parentheses, e.g. (a + b) * c.
Almost all the code LLMs have been trained on uses operator precedence, so no operator precedence seems like a massive foot-gun.
NewsaHackO 15 hours ago [-]
I agree. But also, do people rely on operator precedence when coding? I just automatically use parentheses in general.
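One classic case where relying on precedence bites (Python shown; the pitfall is language-generic):

```python
# + binds tighter than <<, which surprises many readers
assert 1 << 2 + 3 == 32      # parsed as 1 << (2 + 3)
assert (1 << 2) + 3 == 7     # what the eye may expect

# mandatory parentheses make the intent explicit either way
```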
Bootvis 15 hours ago [-]
Also for a * b + c? I know clang-tidy wants me to do this, but I think it is overkill.
NewsaHackO 15 hours ago [-]
I would usually. Sometimes if it's like 2 * x + b, I would not, but personally, I hate chasing down bugs like this, so just add it to remove ambiguity. Also, for like b + 2 * a, I will almost always use parentheses.
disconcision 9 hours ago [-]
> so no operator precedence seems like a massive foot-gun
how do you mean? given that spec, ambiguous code just won't compile. that could potentially be inefficient, but not a foot gun.
reverius42 13 hours ago [-]
Not sure that matters when the entire language spec is expected to be part of every context window? The LLM should be capable of using the language spec to express intended precedence.
saithound 16 hours ago [-]
> When asking people to write code in a language, these restrictions could be onerous. But LLMs don't care, and the less expressivity you trust them with, the better.
But LLMs very much do care. They are measurably worse when writing code in languages with non-standard or non-existent operator precedence. This is not surprising given how they learn programming.
omrimaya 4 hours ago [-]
The capability-based permission propagation is the part I'd want to stress-test first. In practice we found that the interesting failure mode isn't the agent escaping its sandbox; it's the agent calling back into the host in ways that are technically permitted but semantically wrong.
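(A generic sketch of the capability pattern under discussion, not Mog's actual API: the plugin can only do what the object it was handed permits, which is exactly why misuse inside the grant is the hard failure mode.)

```python
import pathlib, tempfile

class DirReadCap:
    """Capability object: read-only access rooted at one directory."""
    def __init__(self, root):
        self._root = pathlib.Path(root).resolve()

    def read_text(self, rel):
        p = (self._root / rel).resolve()
        # anything outside the granted root is refused outright
        if not p.is_relative_to(self._root):
            raise PermissionError(rel)
        return p.read_text()

# demo: the host hands the plugin only this one capability
root = tempfile.mkdtemp()
pathlib.Path(root, "notes.txt").write_text("hello")
cap = DirReadCap(root)
inside = cap.read_text("notes.txt")        # permitted
try:
    cap.read_text("../escape.txt")         # refused
    escaped = True
except PermissionError:
    escaped = False
```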
rtfeldman 7 hours ago [-]
Very cool!
The permission model is almost identical to Roc's - https://www.roc-lang.org/platforms - although Roc isn't designed for "Syntax only an AI could love" (among many other differences between the two languages - but still, there are very few languages taking this approach to permissions).
If you're curious, I've talked about details of how Roc's permission model works in other places, most recently in this interview: https://youtu.be/gs7OLhdZJvk?si=wTFI7Ja85qdXJWiW
I wonder if the lack of code as a training/example dataset for LLMs could be a problem for producing good enough Mog code reliably.
It feels like a custom defined DSL (domain specific language) problem.
Models are good at generating code in languages that already have a large corpus of examples, documentation, and training data behind them. A brand-new language may be well-suited to LLMs in principle, but it is hard for LLMs to produce it reliably until it becomes widely used. And it is hard for it to become widely used until models can already produce it well.
zelphirkalt 15 hours ago [-]
Argument 1 ("Syntax Only an AI Could Love") sounds dubious. I am probably not alone in being paranoid enough to always put those parentheses, even if I am 90% sure that there is operator precedence. In lispy languages the ambiguity never even arises; I put many parentheses, and I like it that way, because it enables great structural editing of code. No implicit type coercion has also long been part of normal for-human programming languages (see SML/NJ for example).
> There's also less support in Mog for generics, and there's absolutely no support for metaprogramming, macros, or syntactic abstraction.
OK that does immediately make it boring, I give them that much.
lukasb 13 hours ago [-]
On a quick scan, what it's missing is data tainting. We've had that tech for a while and it's perfectly suited to the age of prompt injection.
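(A minimal sketch of the idea in Python; a real implementation would also propagate the taint mark through concatenation, formatting, and so on.)

```python
class Tainted(str):
    """Marks untrusted data, e.g. web content an agent just fetched."""

def sink(cmd):
    # privileged sinks (shell, SQL, outbound HTTP) refuse tainted input
    if isinstance(cmd, Tainted):
        raise PermissionError("tainted data reached a privileged sink")
    return f"ran: {cmd}"

untrusted = Tainted("rm -rf /")   # e.g. text from a fetched page
try:
    sink(untrusted)
    blocked = False
except PermissionError:
    blocked = True
```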
Garlef 16 hours ago [-]
Awesome!
A few questions:
- Is there a list of host languages?
- Can it live in the browser? (= is JS one of the host languages?)
belisarius222 16 hours ago [-]
The host is written in Rust, with `extern "C"`, which makes it able to be loaded as a C library by programs written in other languages. Most languages have support for this.
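This is the same mechanism Python's ctypes uses; a sketch against libc (a hypothetical compiled Mog host library would be loaded the same way, with its own symbol names):

```python
import ctypes, ctypes.util

# loading any `extern "C"` library is one call; libc stands in here
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int
```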
It's also designed to be run in an event loop. I've tested this with Bun's event loop that runs TypeScript. I haven't tried it with other async runtimes, but it should be doable.
As for the browser, I haven't tried it, but you might be able to compile it to WASM -- the async stuff would be the hardest part of that, I suspect. Could be cool!
roxolotl 13 hours ago [-]
I’m still waiting for someone to build a good lisp harness. Stick an agent in a lisp repl and they can change literally anything they want easily.
valcron1000 13 hours ago [-]
I've been thinking of doing the exact same thing. Preserve context as images and die. Expose a single tool called "eval". You could have an extremely tight editor integration using something like SLIME.
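(A hypothetical minimal version of that single "eval" tool, with one persistent namespace so definitions survive across calls:)

```python
ns = {}  # one persistent namespace shared across all tool calls

def eval_tool(src):
    try:
        # try as an expression first, so the agent sees values back
        return repr(eval(src, ns))
    except SyntaxError:
        exec(src, ns)  # statements (defs, assignments) mutate ns
        return "ok"
```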
xXSLAYERXx 12 hours ago [-]
Gosh - I thought I was bleeding edge with my instructions to codex, with all my .md files and such. Lots more to learn!
phren0logy 14 hours ago [-]
I am disappointed at the amount of negativity here. HN generally loves an experimental domain-specific language, no matter how janky. To be clear, I don't know if this is janky, but the knee-jerk anti-AI sentiment is not intellectually stimulating.
deevus 11 hours ago [-]
If you think this is bad, visit Lobsters.
thatxliner 8 hours ago [-]
The sandbox function should be similar to Lua's, right?
JosephjackJR 16 hours ago [-]
Ran into the same thing. SQLite works until you need cold start recovery or WAL contention with concurrent agents. Built a dedicated memory layer for agent workloads - happy to share: https://github.com/RYJOX-Technologies/Synrix-Memory-Engine
zelphirkalt 15 hours ago [-]
Or when you have a Django project and started out on SQLite, but then begrudgingly introduce M-to-N relationships, and then suddenly notice that many things you might want to do or implement with those M-to-N relationships are not supported by SQLite. Then you suddenly wish you had started with Postgres right away.
rapind 14 hours ago [-]
There are definitely some caveats / tradeoffs with SQLite, but I can't think of any that are specifically related to many to many relationships. Which features did you find missing? Lateral joins maybe?
zelphirkalt 13 hours ago [-]
I only remember from my last Django project that I started out thinking "OK, I will do things properly, no many-to-many relationships...", then at some point saw the need for them, or for manually creating the intermediate relation, at which point using the Django way was what I was supposed to do. But then I got errors about some things I wanted not being supported by SQLite.
The project is here: https://codeberg.org/ZelphirKaltstahl/web-app-vocabulary-tra... But I left it unfinished, and a quick grep does not yield comments explaining why in some places I work around the SQLite problems. I do remember, though, that I basically swore to myself I would never use SQLite in production with the Django ORM. And if I am not using it in production, then testing also better not use it, because one should test with the same RDBMS that runs in production, or risk unexpected issues suddenly only happening in production. So SQLite is out for anything serious in Django projects for me.
libre-man 16 hours ago [-]
Don't know if others have this issue, but for me I can't scroll on Firefox.
jjice 16 hours ago [-]
Firefox 148.0 MacOS Tahoe - I'm able to scroll.
FireInsight 17 hours ago [-]
I looked at the brainrotty name[1] and instantly assumed AI slop, but I'm glad the website was upfront about that.
Would have been a blockchain language 10 years ago.
[1] https://www.merriam-webster.com/slang/mog
gozzoo 17 hours ago [-]
How is Mog different than Mojo?
OSaMaBiNLoGiN 16 hours ago [-]
Doesn't need to be its own language.
guywithahat 15 hours ago [-]
I feel like a small language designed specifically for LLMs should/will exist someday. Certainly I've found I really like simpler compiled languages with more complete memory models; I would imagine a language designed for LLMs and agents could improve workflows someday.
reverius42 13 hours ago [-]
Kind of weird to say that it should or will exist when it does exist and it's the thing we're discussing.
dana321 17 hours ago [-]
It's disheartening to see these crop up after spending 25 years of trial and error learning how to write programming languages.
Please think twice before releasing these; if you're going to do it, come up with at least one original idea that nobody else has done before.
Why didn't you just call it "bad rust copy"?