I call it Tradcoding. Not using AI for anything. (You just copy-paste from StackOverflow, as our forefathers once did ;)
I also have two levels "beneath" vibe coding:
- Power Coding: Like power armor, you describe chunks of code in English and it's built. Here you outsource syntax and stdlib, but remain in control of architecture and data flow.
- Backseat Coding: Like vibe coding but you keep peeking at the code and complaining ;)
- Vibe Coding: Total yolo mode. What's a code?
mikepurvis 1 day ago [-]
I feel like this distinction isn't made often or clearly enough. AI as a superpowered autocomplete is far more useful to me than trying to one-shot entire programs or modules.
flir 14 hours ago [-]
Agreed. I'd also add that I have varying levels of watchfulness: paid work I inspect (and understand) every line, and iterate. JS for my blog, I inspect. Throwaway scripts, I skim.
derefr 17 hours ago [-]
I dunno; I think Tradcoding would go beyond regular modern coding, and rather imply some kind of regressive Nara Smith "first grind and sift the flour in your kitchen"-style programming.
No Internet connection, no cache of ecosystem packages, no digitized searchable reference docs; you sit in a room with a computer and a bookshelf of printed SDK manuals, and you make it work. I.e. the 1970s IBM mainframe coding experience!
bombcar 17 hours ago [-]
This isn't terribly far from "Knuth-coding", to call it something: imagining the program in WEB in its purest form and documenting what it does, almost regardless of the actual programming language and how it is done.
andai 9 hours ago [-]
I did something kinda like that when I realized I worked way better when I disconnected my internet. So I had to download documentation to use offline. Quite refreshing honestly.
Not necessarily more efficient, but it feels healthier and more rewarding.
supriyo-biswas 17 hours ago [-]
If you have a good stdlib (which in my case would mean something like Java for its extensive data structures) Tradcoding is entirely possible.
andai 10 hours ago [-]
There's also the popular Tab Complete, which is roughly on the Power Coding level.
amdivia 22 hours ago [-]
I'm waiting for someone to polish a well thought out interface for power coding
salomonk_mur 18 hours ago [-]
The process that Get Shit Done forces is pretty good, with Claude Code as the interface.
Harness/fill in the gaps coding: you define a bunch of tests/reference output/validation procedures, let the AI spin until all lights are green
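That loop is simple enough to sketch in a few lines of Python; `run_tests` and `ask_model_to_fix` here are stand-ins for your validation suite and whatever agent call you use, not any real API:

```python
def harness_loop(run_tests, ask_model_to_fix, max_rounds=10):
    """Spin until all lights are green, or give up after max_rounds.

    run_tests() returns a list of failures (an empty list means green);
    ask_model_to_fix(failures) is the agent call that edits the code.
    """
    for _ in range(max_rounds):
        failures = run_tests()
        if not failures:
            return True   # all lights green
        ask_model_to_fix(failures)
    return False          # budget exhausted; a human takes over
```

The bounded round count is the important design choice: without it, an agent that never converges just spins forever.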
Svoka 19 hours ago [-]
I think 'tab coding' should be a distinct group, where the vast majority of the code is written just by accepting autosuggestions.
djha-skin 1 day ago [-]
I would probably just call it hand coding, as we say we use hand tools in woodworking. Many do this for fun, but knowing the hand tools also makes you a better woodworker.
It's an interesting question: will coding turn out to be more like landscaping (referring specifically to the practice of cutting grass), where no one uses hand tools, to a first approximation? Or will it be more like woodworking, where everyone at least knows where the Stanley hand plane is in their workshop?
rnimmer 1 day ago [-]
Can't wait to sell my artisanal hand-crafted software at the farmer's market.
Humor aside, longhand programming is losing its ability to compete in an open market. Automate or be left behind. This will become increasingly true of many fields, not just software.
bandrami 17 hours ago [-]
Is it though? Really? I'm still waiting for even one of my vendors, commercial or open source, to actually speed up their release cadence.
spppedury 15 hours ago [-]
VS Code Insiders is releasing 3 times a day.
Used to be at most once.
bandrami 15 hours ago [-]
That's actually a great point: judging by the dev team's commits at work there's an unprecedented amount of code being committed but it's not actually making it into releases any faster. Maybe the same thing is happening at my various vendors, but then that kind of argues against the idea that Everything Has Just Changed.
the__alchemist 1 day ago [-]
> “Autonomous Proxies for Execration, or APEs,” Pluto said.
> “By typing in a few simple commands, I can spawn an arbitrary number of APEs in the cloud,” Pluto said.
> “I have hand-tuned the inner loops to the point where a single APE can generate over a megaBraden of wide-spectrum defamation. The number would be much larger, of course, if I didn’t have to pursue a range of strategies to evade spam filters, CAPTCHAs, and other defenses.”
> “Have you tried this out yet?” Corvallis asked.
> “Not against a real subject,” Pluto said. “I invented a fictitious subject and deployed some APEs against it, just to see how it worked in the wild. The fictitious subject has already attracted thousands of death threats,” he added with a note of pride.
> “You mean, from people who saw the defamatory posts seeded by the APEs and got really mad at this person who doesn’t even exist.”
pixl97 7 hours ago [-]
Honestly this is a wonderful real life strategy.
Make a fictitious subject with all the traits of the person you really want to attack (Subject X). Have your social media bots attack Subject X. Anger spillover on social media will begin attacking your true target by trait association. The real target will have a difficult to impossible time coming at you via legal channels as there is no direct association.
jjkaczor 12 hours ago [-]
Bravo - slow-clap! (This book predicted many things about the current state of the world...)
delichon 1 day ago [-]
I am ape writing this post after ape cooking breakfast, and then I'll go for an ape walk. In the future, maybe by Thursday, I can have agents do all of that and relax.
robby_w_g 1 day ago [-]
> In the future, maybe by Thursday, I can have agents do all of that and relax.
Wall-E seems like it’s getting closer to reality every day
pbohun 1 day ago [-]
It's not ape coding. It's skill coding. People who don't have the skill to do math and logic ask others to do it for them.
The reason we have programming languages is the same reason we have musical notation or math notation. It is a far more concise and precise way of communicating than using natural languages.
We could write music using natural language, but no one does because a single page of music would require dozens of pages of natural language to describe the same thing.
boomskats 1 day ago [-]
It's funny that you mention music and notation: sheet music is very compact for musical absolutes like pitch/rhythm/harmony, but a huge part of what we care about with music is nuance, which doesn't reduce cleanly to symbols. Hence there are plenty of words in musical notation that try to describe the desired characteristics of performance, that can't be otherwise encoded into that notation. For example, "with feeling".
That reminds me of an argument on here a while back where I said I wished Spotify let you filter tracks by presence of pitch-correction or autotune. This wasn't because I thought autotune was 'bad' or modern artists were 'fake', but because sometimes I wanted to listen to vocals as a raw performance (intonation, stability, phrasing); I wanted the option of listening to recordings that let me appreciate the _skill_ possessed by the artists that recorded them.
I got _absolutely destroyed_ in that comments section, with people insisting I'm a snob, that I'm disrespectful and bigoted towards modern artists, that there's no way I can actually hear the difference (and if I can't, why does it even matter), and that anyway everyone uses it now because studio time is expensive and it's so much cheaper than trying to get that perfect take. People got so angry I even got a couple of DMs on Twitter. All the while I struggled to articulate or justify why I personally value the _skill_ of exceptional raw vocal performance, what I considered to be performance "with feeling".
But, I had to come to terms with the fact that anyone can sing now - no-one can tell the difference, so the skill generally isn't valued any more. Oh, you spent your entire life learning to sing? You studied it? Because you loved music? Sorry dude, I dunno what to say. I guess you'll have to find another way to stand out. You could try losing some weight. Maybe show some skin.
pmg101 18 hours ago [-]
Self-evidently not the case: look at people absolutely falling over themselves to pay hundreds for seats at West End/Broadway shows just to see the spectacle of live human performance.
globular-toast 16 hours ago [-]
Actually, learning to sing was never really valued. Anyone can learn to sing, but for most that means being a backing singer. Being a lead/soloist is more about timbre and presence (including to a not insignificant extent looks). It's something you either have or you don't.
skeledrew 1 day ago [-]
> It is a far more concise and precise way of communicating than using natural languages.
No. We have programming languages because reading and writing binary/hexadecimal is extremely painful to nigh on impossible for humans. And over the years we got better and better languages, from Assembly to C to Python, etc. Natural language was always the implicit ultimate goal of creating programming languages, and each step toward it was primarily hindered by the need to ensure correctness. We still aren't quite there yet, but this is pretty close.
globular-toast 14 hours ago [-]
No, that is not true at all.
Natural language is natural because it's good for communicating with fellow humans. We have ways to express needs, wants, feelings, doubts, ideas etc. It is not at all "natural" to program a computer with the same language because those computers were not part of the development of the language.
Now, if we actually could develop a real natural language for programming that would be interesting. However, currently LLMs do not participate in natural language development. The development of the language is expected to have been done already prior to training.
Invented languages and codes are used everywhere. Chemical nomenclature, tyre sizes, mathematics. We could try to do that stuff in "natural" language, but it would be considered a serious regression. We develop these things because they empower us to think in ways that aren't "natural" and free our minds to focus on the problem at hand.
skeledrew 9 hours ago [-]
Natural languages are "natural" because they evolved as the de facto way for humans to communicate. Doesn't need to be with fellow humans, but humans were all we've been able to communicate with over our ~300,000 years of existence as a species. And we've done it in thousands of varieties.
> currently LLMs do not participate in natural language development
It's quite literally what LLMs are trained on. You create the core architecture, and then throw terabytes of human-generated text at it until a model that works with said text results. Doesn't matter if it participates in language development or not, it only matters that humans can communicate with it "naturally".
> Invented languages and codes
All languages are invented; the only difference is how conscious and deliberate the process was, which is a function of intended purpose. Just look at Esperanto. Or Valyrian.
globular-toast 8 hours ago [-]
A natural language is a living thing. Every day each speaker adjusts his model a tiny bit. This has advantages but also some serious disadvantages which is why technical writers are very careful to use only a small subset of the language in their writing.
For true natural language programming we'd need to develop a language for reliably describing programs, but this doesn't exist in the language, so why would it exist in the LLM models? It will never exist, unless we invent it, which is, of course, exactly what programming languages are.
Natural languages are not invented. Written scripts are said to be invented, but nobody says a natural language like English or French is invented. It just happened, naturally, as the name suggests.
If natural language were the end goal then mathematics and music would use it too. There's nothing stopping them.
skeledrew 4 hours ago [-]
> For true natural language programming we'd need to develop a language for reliably describing programs
We really don't. Eventually we won't even be programming anymore per se. Consider communicating with someone who isn't fluent in any language you know, and vice versa. In the beginning you need to use a pretty restricted vocabulary set so you understand each other, similar to a programming language. But over time as communication continues, that vocabulary set grows and things become increasingly "natural", and it's easier for you to "program" each other.
Same with LLMs. We just need to get to the point where a model has sufficient user context (as it already has all the vocabulary) for effective communication. Like OpenClaw is currently accessing enough context for enough use cases that its popularity is through the roof. Tell it to do something, and as long as it has access to the relevant tools and services, it just gets it done. All naturally.
justinhj 1 day ago [-]
This is why I never use a calculator. Since my school days I have the skill to do long division. Why hit the sin button when I have the skill to write out a Taylor series expansion?
For many other purposes I have the skill to use Newton Raphson methods to calculate values that mostly work.
Those who use a calculator simply don't have these skills.
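For anyone who wants to join me in calculator abstinence, a minimal Newton-Raphson sketch (my own illustration, square roots via f(x) = x² - a):

```python
def newton_sqrt(a, tol=1e-12, max_iter=60):
    """Approximate sqrt(a) for a > 0 via Newton-Raphson on f(x) = x^2 - a.

    Each step is x <- x - f(x)/f'(x), which simplifies to (x + a/x) / 2.
    Stops when the relative step size drops below tol.
    """
    x = a if a >= 1 else 1.0   # any positive starting guess converges
    for _ in range(max_iter):
        nxt = (x + a / x) / 2
        if abs(nxt - x) <= tol * x:
            return nxt
        x = nxt
    return x
```

The iteration count is bounded so floating-point round-off can never trap the loop; convergence is quadratic, so a few dozen rounds are plenty.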
aqua_coder 1 day ago [-]
There is a notable difference between, say, doing long division on a calculator and prompting an AI to calculate the derivative of a simple continuous function. One requires _understanding_ of the function, while the other just skips the understanding and returns the required derivative.
One is just a means to skip labor-intensive and repetitive actions, while the other is meant to skip the entire point of _why_ you are even calculating in the first place. What is the point of dividing two numbers if you don't even understand the reason behind it?
aspenmartin 1 day ago [-]
I'm not quite sure I understand the logic of this, or how people don't see that claims of "well now everyone is going to be dumber because they don't learn" have been a refrain literally every time a major technological or industrial revolution happens. Computers? The internet? Calculators?
The skills we needed before are just no longer as relevant. It doesn't mean the world will get dumber; it will adapt to the new tooling and paradigm that we're in. There are always people who don't like the big paradigm change, who are convinced it's the end of the "right" way to do things, but they always age terribly.
I find I learn an incredible amount from using AI + coding agents. It's a _different_ experience, and I would argue a much more efficient one to understand your craft.
justinhj 1 day ago [-]
100%. I have been learning so much faster as the models get better at both understanding the world and explaining it to me at whatever level I am ready for.
Using AI as just a generator is really missing out on a lot.
rcxdude 13 hours ago [-]
Integration and differentiation, even before LLMs, were already something that you would be better off just getting a machine to do in most cases. It's far more important to understand what the operations represent than it is to derive the exact closed form of the result yourself, because the actual process of doing it is almost always tedious and mechanical and doesn't give you much insight into the equation you are working with.
RaftPeople 1 day ago [-]
> This is why I never use a calculator.
I always use the calculator.
But, because the numbers that get returned aren't always the right numbers, I try to approximate the answer in my head or with paper and pencil to kind of make sure it's in the ball park.
Also, sometimes it returns digits that don't actually exist, and it's pretty insistent that the digit is correct. If I catch it early I just re-run the equation but there is a special button where I can tell it that it used a digit that does not actually exist.
Sometimes, for complex ones, it tells me it's trying to calculate and provides some details about how it's going about it and keeps going and going and going, for those ones I just reboot the calculator.
lioeters 1 day ago [-]
Solution for a hallucinating calculator: get a second unreliable calculator to verify the work of the first one. This message brought to you by a trillion dollars in investment desperately trying to replace the labor force with pseudo-intelligent calculators.
Also, the calculator may refuse to process certain operations deemed to be offensive or against the interests of the corporate-state.
Not to forget, the calculator consumes so much processing power that most people are unable to run it at home, so you need a subscription service to access general-purpose calculation.
pixl97 6 hours ago [-]
>get a second unreliable calculator to verify the work of the first one
In do or die situations we actually use 3 calculators.
pbohun 1 day ago [-]
You probably also don't use a calculator because it uses a scary language called arabic numerals. Why write 123,456 when you could write out in english: One Hundred Twenty-Three Thousand Four Hundred Fifty-Six? English is your programming language and also your math language, right?
soulofmischief 1 day ago [-]
I hope this comment is sarcastic.
LLMs are able to ingest numbers. And not just Arabic numerals; did you know that there are other kinds of number systems?
Believe it or not, they also ingest multimedia. You don't need the English language to talk to a language model. Anything can be a language; you can communicate using only images.
And for that matter, modern LLMs are great at abstract math (and like anything else the results still need proofreading).
1 day ago [-]
ssivark 1 day ago [-]
Bad analogy. The things I delegate to a calculator, I'm absolutely sure I understand well (and could debug if need be). These are also very legible skills that are easy to remind myself by re-reading the recipe -- so I'm not too worried about skills "atrophying".
skeledrew 1 day ago [-]
Meanwhile those who use a calculator merely hit that sin button and get on with the actual problem at hand, and life in general.
Strongly suspect this is sarcasm, but if it isn't, I applaud your... gusto? Or whatever it is you have going on here.
pmg101 18 hours ago [-]
It's not sarcasm, it's satire.
skeledrew 17 hours ago [-]
That's the right word, thanks.
slekker 1 day ago [-]
[flagged]
waygtdai 1 day ago [-]
[dead]
xeonmc 13 hours ago [-]
People in this thread seriously discussing the merits of the satire are completely missing the joke flying over their heads: the entire thing was meant to be just a setup for the Rewrite It In Rust punchline.
I think this is going to be very prescient! Just as baristas died out once we got machines that could make coffee from powders.
lioeters 1 day ago [-]
People also stopped ape-walking after the invention of the bicycle.
pmg101 18 hours ago [-]
I mean it is true to say that most people in the West now use a car for transport, and walking has become more of a leisure pursuit (rebranded as "hiking") rather than a practical necessity.
makapuf 16 hours ago [-]
My car is typically used twice a week and (like many others) I mostly ride my bike or walk. I'm not special at all and I certainly use the car, but it has not replaced walking.
pmg101 10 hours ago [-]
I don't think either of us disagrees, though, that the number of miles of non-leisure journeys walked per capita is significantly less in 2026 than it was in 1926 or 1826?
I always found it pretty remarkable in David Copperfield when Dickens recounts regular walks between London and Canterbury, which he apparently did make in real life
pixl97 6 hours ago [-]
Hence why one person's behavior is called an anecdote.
lateforwork 1 day ago [-]
> The central view of ape coding proponents was that software engineered by AIs did not match the reliability of software engineered by humans
That's not the reason to do ape coding. AI-generated code is not innovative. If you want to build something unlike anything built before, you have to ape code.
That's just not true. It's like saying compiled code couldn't be innovative, that the only innovative code is assembly. People used to say stuff like that too, in their fear of being replaced. There's nothing new under the sun, I guess [double entendre].
ylee 1 day ago [-]
>The main value of modern ape coding appears to be recreational. Ape coders manifest high levels of engagement during coding sessions and report feelings of relaxation after succeeding in (self-imposed) coding challenges. Competitive ape coding is also popular, with top ranked ape coders being relatively well-known in their communities.
I have never been paid to write code, and my formal CS education is limited to AP Computer Science, and a one-credit Java class in college.
Twenty years ago I wrote a backup script implementing Mike Rubel's insight <http://www.mikerubel.org/computers/rsync_snapshots/> about using `rsync` and hard links to create snapshot backups. It's basically my own version of `rsnapshot`. I have deployed it across several of my machines. Every so often I fix a bug or add a feature. Do I need it, given `rsnapshot`'s existence? No. Is it fun to work on? Yes.
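For anyone unfamiliar with Rubel's trick: the previous snapshot is recreated as a tree of hard links, and rsync then replaces only the files that changed. A rough Python sketch of that first step (illustrative only; `link_snapshot` is my naming, and in practice this step is just `cp -al`):

```python
import os

def link_snapshot(prev, new):
    """Recreate the directory tree `prev` under `new`, hard-linking every
    file instead of copying it (the `cp -al` step in Rubel's scheme).
    rsync can then replace changed files in `new`; unchanged files keep
    sharing disk blocks with the older snapshot."""
    for root, dirs, files in os.walk(prev):
        rel = os.path.relpath(root, prev)
        target = new if rel == "." else os.path.join(new, rel)
        os.makedirs(target, exist_ok=True)
        for name in files:
            os.link(os.path.join(root, name), os.path.join(target, name))
```

Because rsync writes a changed file to a fresh inode and renames it into place, the older snapshot keeps its original contents, while every unchanged file is stored on disk exactly once across all snapshots.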
(I've over the years restored individual files/directories often enough from the resulting backups to have reasonable confidence in the script's effectiveness, but of course one never knows for certain until the day everything gets zapped.)
jayd16 1 day ago [-]
It's pretty strange to me that we imagine a world where AI can handle every problem but we still talk about code. It's like how the Jetsons had bulky TVs.
You don't talk about all the assembly high level languages make, or at least it's no longer how people view things. We don't say "look at this assembly I compiled." Instead the entire concept fades to the back.
pixl97 6 hours ago [-]
The issue is you're measuring this statistic incorrectly.
If you look at the per capita number of people talking about assembly relative to all the people on the planet, it's highly likely there are more people looking at assembly now than whenever your "back then" was. Programmers were simply a tiny part of the population back then.
Each time we make coding easier and more high level we invite more programmers into the total pool.
moregrist 1 day ago [-]
> You don't talk about all the assembly high level languages make, or at least it's no longer how people view things.
Speak for yourself. I routinely look at assembly when worrying about performance, and occasionally drop into assembly for certain things. Compilers are a tool, not a magic wand, and tools have limits.
Much like LLMs. My experience with Claude Code is that it gets significantly worse the further you push it from the mean of its training set. Giving it guidance, or writing critical “key frame” sections by hand, keeps it on track.
People who think this is the end of looking at or writing code clearly work on very different problems than I do.
mvanbaak 1 day ago [-]
> You don't talk about all the assembly high level languages make, or at least it's no longer how people view things. We don't say "look at this assembly I compiled." Instead the entire concept fades to the back.
Some still do. OS and compiler devs, to name a few.
makapuf 16 hours ago [-]
And people looking at code may well be as numerous as people looking at assembly. (Which I even like to do)
avaer 1 day ago [-]
"Aping in" in crypto means (meant?) buying crypto without doing any research.
I know it's not what the thought piece is about, but it's equally accurate to say engineers are "aping in" on AI coding without doing any research. Very much the same vibe: my anti-AI friends suddenly flipped their tune to shill slopped-together apps.
I expect it to go about as well as it did in crypto.
umairnadeem123 1 day ago [-]
NGL the flaw in this piece is the same flaw in every "AI will replace X" argument: it assumes the bottleneck was ever the typing. It wasn't. The bottleneck is knowing what to build and why. I use AI agents for probably 80% of my code output now, and IMO I'm more productive than ever, but only because I spent years "ape coding" first and can immediately tell when the agent is heading somewhere stupid. The people I see struggling with AI coding are exactly the ones who skipped that part. TBF the calculator analogy that keeps coming up in this thread is backwards: nobody is arguing you don't need to understand math to use a calculator. That's literally the point. You DO need to understand it, which is why "ape coding" isn't going away as some niche hobby. It's the prerequisite.
amelius 1 day ago [-]
So we are apes now?
It's so great to be alive in this time of dehumanizing AI.
That looks great. I want to read it! The techno caste system reminds me of Lord of Light.
freetonik 11 hours ago [-]
One of my favorite books, and definitely part of the inspiration.
ghm2199 1 day ago [-]
I would call it code-plumbing. It's like plumbers, who today are socio-economically very distinct from architects and civil and structural engineers.
They have narrow to zero understanding of shear forces or Navier-Stokes (they don't need it to fix a leak).
They will command high rates where labor is limited (a plumber in Indonesia will command lower PPP-adjusted hourly rates than one in America). CS education becomes a subset of applied math, since graduate hiring of code-plumbers will require a narrower certificate to fix an AI system, much as a plumber fixing a building leak is different from a person fixing a water-pipe burst under a road.
A few AI systems will become dominant, probably a mix of your Anthropics and your Googles. They will hire code-plumbers to plumb together all the things they provide.
You don't have to use much brain at all as a code-plumber. You become a remote journeyman, logging in and plumbing with the given tools, making sure there is low back pressure (a term for keeping the load on future plumbers interacting with the AI low) and the like.
0xcafefood 1 day ago [-]
I can't tell if your comparison to plumbers who don't understand theory (Navier-Stokes) is supposed to apply to "ape coders" who write code by hand or to "vibe coders" who outsource their understanding.
thorum 1 day ago [-]
Ape thinking is a cognitive practice where a human deliberately solves problems with their own mind. Practitioners of ape thinking will typically author thoughts by thinking them with their own brain, using neurons and synapses.
The term was popularized when asking a computer to do it for you became the dominant form of cognition. "Ape thinking" first appeared in online communities as derogatory slang, referring to humans who were unable to outsource all their thinking to a computer. Despite the quick spread of asking a computer to do it for you, institutional inertia, affordability, and limitations in human complacency were barriers to universal adoption of the new technology.
masswerk 1 day ago [-]
The slogan of ape thinking (deliberately adjusted for machine readability): "Not AI, not machine generated slop <em-dash> genuine human intelligence."
kazinator 15 hours ago [-]
Idiots; of course the language that the 𒀯 compiler should be in had better be 𒀯, or else it is a toy language.
g9yuayon 1 day ago [-]
I like the Chinese alternative better: 古法编程. It feels like playful self-deprecation, suggesting old-school, handcrafted coding with a wink.
Ape coding sounds harsher and more insulting, implying mindless or sloppy work rather than humor.
AnimalMuppet 1 day ago [-]
"Ancient programming", for those who don't read Chinese. (I had to use Google Translate.)
patrickmay 1 day ago [-]
If you're selling Ape Coding merchandise, send me the link!
mvanbaak 1 day ago [-]
I am reading the comments to see if there’s a merch shop haha
msteffen 1 day ago [-]
I liked this a lot in retrospect.
I really like to understand the practice of software engineering by analogy to research mathematics (like, no one ever asks mathematicians to estimate how long it will take to prove something…).
Something I think software engineers can take from math right now: years of everyone’s math education is spent doing things that computers have always been able to do trivially—arithmetic, solving simple equations, writing proofs that would just be `simp` in Lean—and no one wrings their hands over it. It’s an accepted part of the learning process.
blurbleblurble 1 day ago [-]
Who's still here in 2026
BrianDGLS92 1 day ago [-]
This made me lol
gwern 1 day ago [-]
Naturally, when hobbyists spend a lot of time in a single block to get results (as they are unable to parallelize or meaningfully coordinate over multiple invocations of themselves, due to lacking key cognitive capabilities such as embeddings), they refer to it as 'going ape'.
theusus 1 day ago [-]
AI can produce thousands of lines of code. But that’s not the goal.
segmondy 1 day ago [-]
What is the goal?
saghm 1 day ago [-]
Producing code that does what's intended. The metric is fuzzy and based on the usage of the software, not the number of lines of code. The code itself matters only because, in practice, software tends not to be "one and done": you need to go back and modify it to fix bugs, add features, etc., and it turns out that's usually hard when the code is sloppy. Those needs should still stem from the actual user experience, though, or else we've lost the plot by treating the mechanism as the goal itself.
shepherdjerred 1 day ago [-]
Would my user rather have a program that works 100% in 2 weeks, or a program that works 80% in one day?
When the user needs a change made, would they prefer I spend another two weeks extending my perfect program, or throw a few LLMs at their sloppy code and have it done in a day?
saghm 18 hours ago [-]
That would depend on who your users are and what they're using their program for. My point is that the context of who is using the program, how they're using it, and what they're using it for are what actually matters, because most of the time, software that no one uses is by definition useless. There are circumstances where that might not apply, like code used as part of education or training (whether in a formal course or a project someone writes specifically because they're trying to learn something from the process) or when the purpose is aesthetic or humorous, but I'd argue that whatever process makes sense for them doesn't necessarily bear any resemblance given how different the goals are.
D-Machine 17 hours ago [-]
Try this argument in any field where the stakes are high (e.g. medical diagnostics) and see how far this gets you.
This is really basic decision-theory stuff, often the cost of an error is far greater than the benefit of correctness.
g-b-r 22 hours ago [-]
You're really asking if a user would want a program that fails a fifth of the time?
In some cases it might be better to have some crap right away and more cheaply, but even you would probably not like a 20% failure rate in most of the software you use.
sph 1 day ago [-]
What is art? What is the point of anything? Why write code instead of eating bananas all day? There is no answer to your question.
hparadiz 1 day ago [-]
If everything is C, why not generate the entire bootloader-to-kernel stack, with programs specifically tailored to the user?
saulpw 1 day ago [-]
Tokens cost money; bugs get created and have to be found and fixed. Mature software has been more valuable than new software since 1946.
hparadiz 22 hours ago [-]
You sound like someone in 1992 explaining to me why the internet will never be useful.
saulpw 17 hours ago [-]
And you sound like someone in 1995 saying "everyone will have their own webpage!" which is kind of true but also note how many people actually make their own custom website. Even for people who do have their own "custom" website, they usually use WordPress or an existing static-site generator with a theme they found and tweaked a little.
hparadiz 9 hours ago [-]
Websites are manually assembled things. I'm talking about something that runs automatically in the background that you don't even think about. Our computers today are 10,000 times faster than what we had in 1990. Now extrapolate into the future: 100 years from now, someone will be able to generate a custom OS at today's level as a toy, in minutes or less.
pixl97 6 hours ago [-]
The speed of light disagrees with you unless subatomic computing pops up at some point.
Or to put this a different way: they won't be developing a whole-cloth custom OS, but more like customizing a Linux kernel to do what they want. The reason is that there is some minimum of problem-space exploration required (hence entropy generation needed) to understand the interactions of the hardware and its limitations.
Hence if you wanted to run this 10,000,000X faster computer in the future to do what you are saying, it would explode like a supernova with the energy concentration required to do it quickly.
TL;DR it's a few trillion times more energy efficient not to do this.
hparadiz 3 hours ago [-]
It's amusing to me that folks think nanometer size is the only way to increase density. Also your math is quite a bit off. 10,000 times more compute than a current average desktop is only 10,000 times more energy ... today. Let alone decades from now.
blurbleblurble 21 hours ago [-]
I find it hilarious that we needed a [fiction] indicator, friends.
remix2000 14 hours ago [-]
I shall now drive my fart car back to my cozy meatcave from the public meatspace so that I can do some good old ape coding with my smelly carbon-based friends in peace.
YarickR2 1 days ago [-]
Every joke has a bit of a joke, as they say. I'm proudly ape-coding two of my current projects.
BrianDGLS92 1 days ago [-]
> Everything in this website was written by a human
tshaddox 1 days ago [-]
I’m a fan of the term “human slop,” which I’ve seen pop up recently regarding certain tech company feuds on Twitter.
effdee 1 days ago [-]
I prefer the term "classic coding".
shepherdjerred 1 days ago [-]
I think this piece makes an excellent point.
Maybe the LLMs today are deeply flawed and cannot replace programmers. But, one day, LLMs (or some other AI approach) _will_ be successful in replacing programmers. It might not be this year or the year after.
I do however feel pretty confident in saying that there will be few programmers in 2076. This piece will look quite prescient.
It's just like how we say "can you imagine programming on a punchcard?"
Dansvidania 1 days ago [-]
I don't understand the stance of the post, and since it's the first one in the blog (congrats on getting this hot on your first post), I am unable to investigate further.
Is it sci-fi like writing from the perspective of a future person?
It sounds like someone trying to make assumptions sound like facts. Not a fan.
Philpax 1 days ago [-]
It is presented as a Wikipedia article from the future describing a subculture of tomorrow. See also https://qntm.org/mmacevedo for another example of this genre.
Jolter 1 days ago [-]
I assume you didn’t make it to the last paragraph, where they put the punchline.
g-b-r 21 hours ago [-]
That "punchline" seems just a final argument in support of the thesis (that manual coding is becoming absurd, and only people as dumb as apes will insist on doing it).
raxskle 1 days ago [-]
The merits and demerits of this product vary from person to person, and I dare not make a definite assertion
globular-toast 16 hours ago [-]
Why do people think "agent coding" is a skill? There is not a single programmer who is "unable to program with agents". It's like saying Albert Roux was unable to heat up a ready meal in a microwave.
layer8 1 days ago [-]
Arguably, it’s the LLMs that are doing the aping, and hence the ape-coding.
In that picture, aping is probably a step up from stochastic parroting.
satisfice 1 days ago [-]
It’s known as hand-coding. We’ve had this term for many years.
yomismoaqui 1 days ago [-]
So, are we doing fan-fiction now?
xyzsparetimexyz 10 hours ago [-]
Oh fuck off.
lioeters 9 hours ago [-]
Flings shit
g-b-r 1 days ago [-]
This is meant to insult AI skeptics, let's not pretend to be idiots.
It should be flagged and taken down.
ylee 1 days ago [-]
> This is meant to insult AI skeptics, let's not pretend to be idiots.
Only an idiot would read the piece in that way.
>It should be flagged and taken down.
Even if it really did "insult AI skeptics" (and, again, no one with any reasonable ability to comprehend wit and satire would take it that way), how is that justification to get it "flagged and taken down"?!?
g-b-r 22 hours ago [-]
Well, one of us is probably that indeed.
> how is that justification to get it "flagged and taken down"?!?
Posts that are likely to result in flames generally are.
rmsaksida 1 days ago [-]
That is not what this is meant as.
g-b-r 22 hours ago [-]
Do explain what you meant it to be.
unconed 1 days ago [-]
>Despite the quick spread of agentic coding, institutional inertia, affordability, and limitations in human neuroplasticity were barriers to universal adoption of the new technology.
Blaming lack of adoption purely on regressive factors follows the same frame that AI firms set. It isn't very effective satire for that reason.
It couldn't be that there is something essential and elementary that is wrong with the output, no... all these experienced experts are just troglodytes and wrong, and we should instead tag along with the people who offloaded the parts of their work they found tough to a machine the first chance they got.
There's no such thing as ape coding. There's still just coding, and vibe coding.
adrian-vega 6 hours ago [-]
[dead]
throwaway613746 1 days ago [-]
[dead]
g-b-r 1 days ago [-]
[flagged]
serious_angel 1 days ago [-]
[flagged]
nehal3m 1 days ago [-]
I don’t think it was meant that seriously. I read it as a humorous fiction written as if in the future, and I thought it was funny. Even speaking as a primate.
bitexploder 1 days ago [-]
When someone so clearly misses an article written tongue in cheek and uses personal insults to let us know they missed the point, one begins to wonder. Apes code together. Apes stronger together. Return to monke.
g-b-r 1 days ago [-]
The point was to call AI skeptics apes, and you probably know it well.
lyu07282 1 days ago [-]
Why has nobody mentioned yet how dangerous this really is? Have we all forgotten the great Datacenter burnings of 2031? The APEs are one step away from becoming fully fledged Luddite terrorists. Artisanal software is unamerican just like President Barron said the other day on his Twitch stream.
samoit 1 days ago [-]
I always thought that ape coding is what we call vibe-coding nowadays. Maybe the writer of the article (maybe an AI-generated blog?) misunderstood the terms.
gas9S9zw3P9c 1 days ago [-]
"Humans are now writing code in a strict specification language so that AI agents have complete context and don't make mistakes. This specification language is called C' and has led to a whopping 20% reduction in code: 1000 lines of C++ can be expressed in no more than 800 lines of specification C' code written by humans"
hanifbbz 1 days ago [-]
WTF is this?! Satire? AI-generated propaganda? I honestly don't get it. Can OP elaborate on why it's good content worthy of people's time? Thanks in advance.
rmsaksida 1 days ago [-]
It's fiction. I did not use AI to write it. On whether it's worthy of people's time... well, I'm not presumptuous enough to say. :)
jshmrsn 1 days ago [-]
I enjoyed reading it. Whether one believes the future will look like this fictional/hypothetical one, it encourages the reader to think about what would need to become true for this future to be plausible.
serious_angel 1 days ago [-]
Who knows? 5 people? 10? Only those who actually read it, and even then I'm not sure. Did they read it? Or did they also believe it was written by AI? I tried to believe it was written by a human when I noticed the note in its footer. It was hard to believe, knowing my fear of today's trends, where much of what people "read" is an empty dark in which human time is voided. So what is the main idea behind it, nowadays, when just a few will actually read it?
Considering how some modern attitudes work for certain people, and how much power trends and socials can offer, such terms get boosted over... and you just hope and keep believing in people...
Related: https://medium.com/@nathanladuke/b56da64a09ee (To Those Who Comment Their Opinion Without Reading the Whole Story... I was shocked at how many people simply read the title and then posted their opinion on the whole article...)
rmsaksida 1 days ago [-]
Yes, I understand what you're saying perfectly. And I had similar thoughts while I was writing this. I do not want to talk too much about the process of writing it, or the content itself, because I feel it's not right for me (the author) to talk about it. But I'd like to make it clear that I wrote this myself, and that many of the questions and points people have raised here have also been in my mind, and it was my intention to elicit this type of thinking. Thank you and all others for the comments - I really appreciate it, even the very negative ones. This is the first time I published something online and I'm very happy that it resonated with people.
jjcc 1 days ago [-]
Ape writing? (kidding)
AreShoesFeet000 1 days ago [-]
[flagged]
dang 1 days ago [-]
Please don't cross into personal attack, no matter how wrongheaded another comment is or you feel it is.
https://news.ycombinator.com/newsguidelines.html
Anyone else bothered by "ape" being overwhelmingly derogatory slang for black people?
Seems like it's a doubly offensive term.
Are there better terms, less encumbered by bigotry, while still covering the "meat space" quality to this development approach?
blurbleblurble 23 hours ago [-]
Racism sucks and I'm bothered by it tremendously. For example the dog whistles in bored ape yacht club were obnoxious to say the least. But I don't think this is that. This is a silly satire on the ways people are getting tripped up on a fallacy, taking the concept of "ai" as being an autonomous force separable from people way too seriously. It's not of course. It's another iteration of the same old tools.
627467 1 days ago [-]
[flagged]
https://github.com/gsd-build/get-shit-done
It's an interesting question: will coding turn out to be more like landscaping, where (referring specifically to the practice of cutting grass) no one uses hand tools, to a first approximation? Or will it be more like woodworking, where everyone at least knows where the Stanley hand plane is in their workshop?
Humor aside, longhand programming is losing its ability to compete in an open market. Automate or be left behind. This will become increasingly true of many fields, not just software.
Used to be at most once.
“Have you tried this out yet?” Corvallis asked.
“Not against a real subject,” Pluto said. “I invented a fictitious subject and deployed some APEs against it, just to see how it worked in the wild. The fictitious subject has already attracted thousands of death threats,” he added with a note of pride.
“You mean, from people who saw the defamatory posts seeded by the APEs and got really mad at this person who doesn’t even exist.”
Make a fictitious subject with all the traits of the person you really want to attack (Subject X). Have your social media bots attack Subject X. Anger spillover on social media will begin attacking your true target by trait association. The real target will have a difficult to impossible time coming at you via legal channels as there is no direct association.
Wall-E seems like it’s getting closer to reality every day
The reason we have programming languages is the same reason we have musical notation or math notation. It is a far more concise and precise way of communicating than using natural languages.
We could write music using natural language, but no one does because a single page of music would require dozens of pages of natural language to describe the same thing.
That reminds me of an argument on here a while back, where I said I wished Spotify would let you filter tracks by the presence of pitch-correction or autotune. This wasn't because I thought autotune was 'bad' or modern artists were 'fake', but because sometimes I wanted to hear vocals as a raw performance (intonation, stability, phrasing); I wanted the option of listening to recordings that let me appreciate the _skill_ possessed by the artists who recorded them.
I got _absolutely destroyed_ in that comments section, with people insisting I'm a snob, that I'm disrespectful, bigoted towards modern artists, that there's no way I can actually hear the difference (and if I can't, why does it even matter), and that anyway everyone uses it now because studio time is expensive and it's so much cheaper than trying to get that perfect take. People got so angry that I even got a couple of DMs on Twitter. All the while I struggled to articulate or justify why I personally value the _skill_ of an exceptional raw vocal performance, what I considered to be performance "with feeling".
But, I had to come to terms with the fact that anyone can sing now - no-one can tell the difference, so the skill generally isn't valued any more. Oh, you spent your entire life learning to sing? You studied it? Because you loved music? Sorry dude, I dunno what to say. I guess you'll have to find another way to stand out. You could try losing some weight. Maybe show some skin.
No. We have programming languages because reading and writing binary/hexadecimal is extremely painful to nigh on impossible for humans. And over the years we got better and better languages, from Assembly to C to Python, etc. Natural language was always the implicit ultimate goal of creating programming languages, and each step toward it was primarily hindered by the need to ensure correctness. We still aren't quite there yet, but this is pretty close.
Natural language is natural because it's good for communicating with fellow humans. We have ways to express needs, wants, feelings, doubts, ideas etc. It is not at all "natural" to program a computer with the same language because those computers were not part of the development of the language.
Now, if we actually could develop a real natural language for programming that would be interesting. However, currently LLMs do not participate in natural language development. The development of the language is expected to have been done already prior to training.
Invented languages and codes are used everywhere. Chemical nomenclature, tyre sizes, mathematics. We could try to do that stuff in "natural" language, but it would be considered a serious regression. We develop these things because they empower us to think in ways that aren't "natural" and free our minds to focus on the problem at hand.
> currently LLMs do not participate in natural language development
It's quite literally what LLMs are trained on. You create the core architecture, and then throw terabytes of human-generated text at it until a model that works with said text results. Doesn't matter if it participates in language development or not, it only matters that humans can communicate with it "naturally".
> Invented languages and codes
All languages are invented; the only difference is how conscious and deliberate the process was, which is a function of intended purpose. Just look at Esperanto. Or Valyrian.
For true natural language programming we'd need to develop a language for reliably describing programs, but this doesn't exist in the language, so why would it exist in the LLM models? It will never exist, unless we invent it, which is, of course, exactly what programming languages are.
Natural languages are not invented. Written scripts are said to be invented, but nobody says a natural language like English or French is invented. It just happened, naturally, as the name suggests.
If natural language were the end goal then mathematics and music would use it too. There's nothing stopping them.
We really don't. Eventually we won't even be programming anymore per se. Consider communicating with someone who isn't fluent in any language you know, and vice versa. In the beginning you need to use a pretty restricted vocabulary set so you understand each other, similar to a programming language. But over time as communication continues, that vocabulary set grows and things become increasingly "natural", and it's easier for you to "program" each other.
Same with LLMs. We just need to get to the point where a model has sufficient user context (as it already has all the vocabulary) for effective communication. Like OpenClaw is currently accessing enough context for enough use cases that its popularity is through the roof. Tell it to do something, and as long as it has access to the relevant tools and services, it just gets it done. All naturally.
Those who use a calculator simply don't have these skills.
The skills we needed before are just no longer as relevant. It doesn't mean the world will get dumber, it will adapt to the new tooling and paradigm that we're in. There are always people who don't like the big paradigm change, who are convinced it's the end of the "right" way to do things, but they always age terribly.
I find I learn an incredible amount from using AI + coding agents. It's a _different_ experience, and I would argue a much more efficient one to understand your craft.
Using AI as just a generator is really missing out on a lot.
I always use the calculator.
But, because the numbers that get returned aren't always the right numbers, I try to approximate the answer in my head or with paper and pencil to kind of make sure it's in the ball park.
Also, sometimes it returns digits that don't actually exist, and it's pretty insistent that the digit is correct. If I catch it early I just re-run the equation but there is a special button where I can tell it that it used a digit that does not actually exist.
Sometimes, for complex ones, it tells me it's trying to calculate and provides some details about how it's going about it and keeps going and going and going, for those ones I just reboot the calculator.
Also, the calculator may refuse to process certain operation deemed to be offensive or against the interest of the corporate-state.
Not to forget, the calculator consumes so much processing power that most people are unable to run it at home, so you need a subscription service to access general-purpose calculation.
In do or die situations we actually use 3 calculators.
LLMs are able to ingest numbers, and not just Arabic numerals; did you know that there are other kinds of number systems?
Believe it or not, they also ingest multimedia. You don't need the English language to talk to a language model. Anything can be a language; you can communicate using only images.
And for that matter, modern LLMs are great at abstract math (and like anything else the results still need proofreading).
Strongly suspect this is sarcasm, but if it isn't, I applaud your... gusto? Or whatever it is you have going on here.
See also: https://longestjokeintheworld.com/
I always found it pretty remarkable in David Copperfield when Dickens recounts regular walks between London and Canterbury, which he apparently did make in real life
That's not the reason to do ape coding. AI-generated code is not innovative. If you want to build something unlike anything that has been built before, you have to ape-code.
See Chris Lattner's blog where he explains the limitations of AI: https://www.modular.com/blog/the-claude-c-compiler-what-it-r...
I have never been paid to write code, and my formal CS education is limited to AP Computer Science and a one-credit Java class in college. Twenty years ago I wrote a backup script implementing Mike Rubel's insight <http://www.mikerubel.org/computers/rsync_snapshots/> about using `rsync` and hard links to create snapshot backups. It's basically my own version of `rsnapshot`. I have deployed it across several of my machines. Every so often I fix a bug or add a feature. Do I need it, given `rsnapshot`'s existence? No. Is it fun to work on? Yes.
(I've over the years restored individual files/directories often enough from the resulting backups to have reasonable confidence in the script's effectiveness, but of course one never knows for certain until the day everything gets zapped.)
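For readers who haven't seen the trick, the core of Rubel's approach is that a hard-link copy of the previous snapshot costs almost no disk space, so each snapshot looks like a full backup while storing only the changed files. A minimal Python sketch of the idea (the function name, the `snap.0`/`snap.1` layout, and the mtime+size change check are illustrative choices of mine; the real scripts drive `rsync` with `--link-dest` instead):

```python
import os
import shutil

def rotate_snapshot(src: str, backups: str) -> None:
    """Build backups/snap.0 as a fresh snapshot of src, hard-linking any
    file unchanged since the previous snapshot so the two snapshots share
    disk blocks; the previous snapshot is shifted to backups/snap.1."""
    prev = os.path.join(backups, "snap.0")
    new = os.path.join(backups, "snap.tmp")
    os.makedirs(new, exist_ok=True)
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        dst_dir = new if rel == "." else os.path.join(new, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(dst_dir, name)
            # Path of the same file inside the previous snapshot.
            p = os.path.join(prev, name) if rel == "." \
                else os.path.join(prev, rel, name)
            if (os.path.isfile(p)
                    and os.path.getmtime(p) == os.path.getmtime(s)
                    and os.path.getsize(p) == os.path.getsize(s)):
                os.link(p, d)       # unchanged: new name, same inode
            else:
                shutil.copy2(s, d)  # new or modified: real copy
    # Rotate: snap.0 -> snap.1, promote the fresh snapshot to snap.0.
    old = os.path.join(backups, "snap.1")
    if os.path.isdir(old):
        shutil.rmtree(old)
    if os.path.isdir(prev):
        os.rename(prev, old)
    os.rename(new, prev)
```

Because snapshots only share inodes, deleting an old snapshot never damages a newer one, which is what makes the scheme so forgiving in practice.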
You don't talk about all the assembly that high-level languages generate; at least, that's no longer how people view things. We don't say "look at this assembly I compiled." Instead the entire concept fades into the background.
If you look at the absolute number of people on the planet talking about assembly, it's highly likely there are more people looking at assembly now than in whatever "back then" you have in mind. Programmers were simply a tiny part of the population back then.
Each time we make coding easier and more high level we invite more programmers into the total pool.
Speak for yourself. I routinely look at assembly when worrying about performance, and occasionally drop into assembly for certain things. Compilers are a tool, not a magic wand, and tools have limits.
Much like LLMs. My experience with Claude Code is that it gets significantly worse the further you push it from the mean of its training set. Giving it guidance, or writing critical “key frame” sections by hand, keeps it on track.
People who think this is the end of looking at or writing code clearly work on very different problems than I do.
Some still do. OS and compiler devs, to name a few.
I know it's not what the thought piece is about, but it's equally accurate to say engineers are "aping in" on AI coding without doing any research. Very much the same vibe: my anti-AI friends suddenly flipped their tune and now shill slopped-together apps.
I expect it to go about as well as it did in crypto.
It's so great to be alive in this time of dehumanizing AI.
https://en.wikipedia.org/wiki/Human_taxonomy#History
They will have narrow to zero understanding (they won't need it to do fixes) of shear forces or Navier-Stokes.
They will command high rates where labor is limited (a plumber in Indonesia will command lower PPP-adjusted hourly rates than one in America). CS education will become a subset of applied math, since graduate hiring of code-plumbers will require a narrower certificate for fixing an AI system, much as a plumber fixing a building leak does different work from a person fixing a water-pipe burst under a road.
A few AI systems will become dominant, probably a mix of your Anthropics and your Googles. They will hire code-plumbers to plumb together all the things they provide.
You don't have to use much brain at all as a code-plumber. You become a remote journeyman, logging in and plumbing with the given tools, making sure there is low back pressure (a term for keeping down the load on future plumbers who interact with and fix the AI system) and the like.
The term was popularized when asking a computer to do it for you became the dominant form of cognition. "Ape thinking" first appeared in online communities as derogatory slang, referring to humans who were unable to outsource all their thinking to a computer. Despite the quick spread of asking a computer to do it for you, institutional inertia, affordability, and limitations in human complacency were barriers to universal adoption of the new technology.
Ape coding sounds harsher and more insulting, implying mindless or sloppy work rather than humor.
I really like to understand the practice of software engineering by analogy to research mathematics (like, no one ever asks mathematicians to estimate how long it will take to prove something…).
Something I think software engineers can take from math right now: years of everyone’s math education is spent doing things that computers have always been able to do trivially—arithmetic, solving simple equations, writing proofs that would just be `simp` in Lean—and no one wrings their hands over it. It’s an accepted part of the learning process.
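The analogy can be made concrete. Much of what a first proofs course spends pages on closes in a single tactic invocation in a proof assistant, yet we still teach it by hand. A sketch in Lean 4 (assuming the standard `simp` lemma set, as shipped with core Lean/mathlib):

```lean
-- Identities that take a paragraph of prose in a first course
-- are one tactic call for the machine.
example (n : Nat) : n + 0 = n := by simp

example (xs : List Nat) : (xs ++ []).length = xs.length := by simp
```

No one argues that students should skip these because `simp` exists; the hand-work is understood to be part of building the intuition.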
When the user needs a change made, would they prefer I spend another two weeks extending my perfect program, or throw a few LLMs at their sloppy code and have it done in a day?