This is very much in line with what I found fascinating about optimizing microgpt for speed (0). Or rather, what I was able to do with it after doing so. It's so small and so fast to train that you can really dig deep into the optimization landscape. I've spent all my free time this past week digging into it.
0: https://entrpi.github.io/eemicrogpt/
(The writeup is from a few days ago, and I'm still running experiments before I do a big rewrite. Slowrun is good food for thought.)
linolevan 1 day ago [-]
There was this very interesting paper out of Stanford last September about pretraining under the unlimited-compute, limited-data paradigm[0]. Pretty much exactly the same thing, but with ~200M training tokens instead.
[0] https://www.alphaxiv.org/abs/2509.14786
yeah, we do incorporate some of the findings from the paper in our repo! like aggressive regularization and ensembling.
_0ffh 23 hours ago [-]
I see you already mention diffusion - iirc there was a result not too long ago that diffusion models keep improving with more epochs for longer than AR models do. For reference, here's the paper I remembered: https://arxiv.org/pdf/2507.15857
sdpmas 23 hours ago [-]
diffusion is promising, but it's still an open question how data-efficient it is compared to AR. in practice, you can also train AR forever with high enough regularization, so let's see.
> Second-order optimizers and natural gradient methods
Do second order optimizers help improve data efficiency? I assumed they’d help you get to the same minimum faster (but this is way outside my wheelhouse).
sdpmas 23 hours ago [-]
yes! typically the optimizer that trains faster also gets better data efficiency. it may not be universally true, but that has been my observation so far. also see https://arxiv.org/pdf/2510.09378 for second-order methods.
https://arxiv.org/abs/2006.10732
The above provides a nuanced theoretical view: GD's inductive bias is probably better unless your model is misspecified.
alyxya 22 hours ago [-]
Fundamentally I don't believe second-order methods improve data efficiency by themselves, but changes to the optimizer can, because the convergence behavior changes. ML theory lags behind the results seen in practice.
jbergqvist 14 hours ago [-]
Very interesting benchmark, excited to see what comes out of this. Considering humans are enormously more sample-efficient than today's models, it seems clear there's a lot of room to close that gap. The fact that they hit 5.5x in the first week with relatively straightforward changes suggests we're nowhere near the ceiling for data efficiency.
sdpmas 14 hours ago [-]
absolutely!
londons_explore 15 hours ago [-]
I think there will be good headway in using the part-trained model to generate more training data for itself: making up tasks, completing those tasks with many different approaches, evaluating which solution is best (using the same LLM as judge), and then differentially training on the best solutions vs. the worst ones.
The challenge is that such an approach almost certainly requires a model with RLHF post-training, yet here it has to happen in the pretraining phase. But with infinite compute, this isn't an issue - you simply do the post-training many times.
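That loop might look roughly like this (a toy sketch: the stub functions standing in for the part-trained model, the task/solve/judge helpers, and the preference-pair output are all hypothetical, not an existing pipeline):

```python
import random

random.seed(0)

# Stubs: in reality each of these would be a call into the part-trained LLM itself.
def generate_task(model):
    return f"task-{random.randint(0, 9)}"

def solve(model, task):
    return f"{task}:answer-{random.random():.3f}"

def judge(model, task, solution):
    # Same model scores its own output (LLM-as-judge); random stand-in here.
    return random.random()

def self_improvement_round(model, n_tasks=4, n_attempts=3):
    """One round: invent tasks, attempt each several ways, rank attempts with the
    model as judge, and return (task, best, worst) triples for a contrastive /
    preference-style update."""
    pairs = []
    for _ in range(n_tasks):
        task = generate_task(model)
        attempts = [solve(model, task) for _ in range(n_attempts)]
        ranked = sorted(attempts, key=lambda s: judge(model, task, s))
        pairs.append((task, ranked[-1], ranked[0]))  # best last, worst first
    return pairs

pairs = self_improvement_round(model=None)
```

The (best, worst) pairs would then drive a DPO-style "train toward best, away from worst" step; how reliably a part-trained model can judge its own outputs is the open question.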
lzaborowski 1 day ago [-]
I like the idea of flipping the constraint. Most ML benchmarks assume unlimited data and limited compute, so people optimize for speed.
If high-quality training data becomes the real bottleneck, then the interesting question is how much signal you can extract from the same dataset when compute is cheap.
kseniamorph 1 day ago [-]
Curious about the baseline choice. modded-nanogpt was optimized for wall-clock speed, not data efficiency, so it seems like an unusual reference point for this kind of benchmark. Why not vanilla NanoGPT?
timshel1 1 day ago [-]
Modded-nanogpt is also much more data-efficient than vanilla nanogpt, even if some of its individual optimizations trade away data efficiency for higher throughput.
sdpmas 1 day ago [-]
yes, agreed, modded-nanogpt is already a data-efficient variant of the original nanogpt. it's just that the kinds of algorithms it allows are somewhat constrained because it optimizes for wall-clock time.
archermarks 1 day ago [-]
Very cool idea. Interested to see how this progresses.
One question: how worried are you about over-training on this particular dataset, i.e. leaning toward memorization instead of generalizing? Obviously you hold out a validation set, but since you're meta-optimizing the model by its performance on that validation set, you're still at risk of over-fitting.
sdpmas 1 day ago [-]
yes, good point. right now it's somewhat hard to overfit because the meta-optimization extracts only tiny bits of information. but over time, we will switch the validation set to some other random subset of FineWeb, or even to entirely OOD datasets!
xpe 4 hours ago [-]
The question is not if but when. I hope the project authors acknowledge the problem directly: it is not merely a risk; it is a statistical certainty given enough time. So, what's the plan?
At the very least, track it. How will the project maintainers instrument this?
rcarmo 15 hours ago [-]
This feels like optimizing for local minima, but more verbosely. Even the epoch shuffling doesn’t seem like it would get them out of that pitfall.
navvyeanand 1 day ago [-]
Amazing job!
suddenlybananas 1 day ago [-]
Reminds me a fair bit of the BabyLM challenge. It would be good to give them a shout-out and see how this challenge differs.
sdpmas 1 day ago [-]
hey, it's Samip (behind the Slowrun repo). yeah that's a fair point, we will mention them in the blog. but there are a couple of major differences:
1. our emphasis is on using more compute to get better data efficiency. this is important because there are lots of hacky changes that will get lower loss but don't do so well when compared to general methods that leverage a lot of compute. and you can already see how this emphasis on compute leads to different methods than BabyLM's!
2. our reasoning behind the repo doesn't have anything to do with how much data a child sees, and our dataset isn't tailored toward that either. it's simple pretraining on a random subset of the internet. we know there are better training algorithms that get lower loss on that data, and we are finding those.
Mumps 8 hours ago [-]
I feel like you really need to mention BabyLM. For example you have:
> Directions we think are wide open ... Curriculum learning
BabyLM and its offshoots published a pretty convincing body of work on exactly that (which suggests it's not particularly relevant to LM training).
As I read your page, I really felt like the brevity-thoroughness tradeoff went the wrong way.
soraki_soladead 1 day ago [-]
also, BabyLM is more of a conference track / workshop than an open-repo competition which creates a different vibe
refulgentis 24 hours ago [-]
This looks awesome!!! I'm curious about the ensemble: does it mean "train 8 different models and pick the best one"? That's what my mind jumps to, but that also seems wrong, because then you could just keep increasing the number of different models you train to get a win.
sdpmas 23 hours ago [-]
no, ensembling means: train 8 models, and during inference average the logits of all 8 models to make a prediction.
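A minimal numpy sketch of that kind of logit averaging (random matrices standing in for the trained models, hypothetical shapes - just the mechanics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for 8 independently trained models:
# each reduced here to a single (d_model, vocab) output head.
heads = [rng.normal(size=(16, 100)) for _ in range(8)]

def ensemble_logits(heads, x):
    """Average logits across ensemble members (not probabilities, not weights)."""
    logits = np.stack([x @ W for W in heads])  # (n_models, batch, vocab)
    return logits.mean(axis=0)                 # (batch, vocab)

x = rng.normal(size=(4, 16))                   # a batch of 4 hidden states
avg = ensemble_logits(heads, x)
pred = avg.argmax(axis=-1)                     # one predicted token id per row
```

Averaging logits is one common choice among several (probabilities or weights are alternatives); the members only need to share a vocabulary.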
magicalhippo 8 hours ago [-]
Maybe some newer references are better, but my mind went to the Model Soups paper[1]:
> The conventional recipe for maximizing model accuracy is to (1) train multiple models with various hyperparameters and (2) pick the individual model which performs best on a held-out validation set, discarding the remainder. In this paper, we revisit the second step of this procedure in the context of fine-tuning large pre-trained models, where fine-tuned models often appear to lie in a single low error basin. We show that averaging the weights of multiple models fine-tuned with different hyperparameter configurations often improves accuracy and robustness. Unlike a conventional ensemble, we may average many models without incurring any additional inference or memory costs -- we call the results "model soups."
That doesn't seem all that different from a MoE architecture.
[1]: https://arxiv.org/abs/2203.05482
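For contrast with logit ensembling, a uniform soup averages the parameters themselves, so inference cost stays that of a single model. A toy sketch (hypothetical checkpoint format; assumes every checkpoint shares one architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fine-tuned checkpoints: same architecture, different
# hyperparameters, represented here as dicts of weight arrays.
checkpoints = [
    {"w": rng.normal(size=(16, 100)), "b": rng.normal(size=100)}
    for _ in range(3)
]

def uniform_soup(checkpoints):
    """Average each named parameter across checkpoints -> one merged model."""
    return {
        name: np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
        for name in checkpoints[0]
    }

soup = uniform_soup(checkpoints)  # same shape as any single checkpoint
```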
yorwba 14 hours ago [-]
It's the opposite of a MoE architecture in many ways. MoE splits every individual feed-forward layer into many tiny subnetworks, only a small number of which contribute to the layer output, and they get trained together to complement each other.
Ensembling makes multiple copies of the entire model, trains them independently on the same task, and then has every copy contribute to the output.
Reducing computation vs. increasing it; operating at per-layer granularity vs. whole model; specialization vs. redundancy.
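The per-layer routing is easy to see in a toy top-1 MoE sketch (random weights and hypothetical sizes; real MoE layers use learned gates inside each transformer block):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts = 16, 8
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]  # tiny FFN "experts"
router = rng.normal(size=(d, n_experts))                       # gating weights

def moe_layer(x):
    """Top-1 routing: each token runs through exactly ONE expert,
    whereas an ensemble runs every input through every model."""
    scores = x @ router                 # (batch, n_experts) gate scores
    choice = scores.argmax(axis=-1)     # chosen expert index per token
    out = np.stack([x[i] @ experts[choice[i]] for i in range(len(x))])
    return out, choice

x = rng.normal(size=(4, d))
out, choice = moe_layer(x)
```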
STARGA 24 hours ago [-]
[flagged]
whimsicalism 24 hours ago [-]
really no shame in comments like these?
devinplatt 23 hours ago [-]
It seems like best etiquette would be to have a username with "bot" in it and include something in the post explicitly indicating it's a bot (e.g. a signature).
This isn't even a new problem where a good cultural solution hasn't been figured out yet. Reddit has had bot etiquette for years.