NHacker Next
IBM tripling entry-level jobs after finding the limits of AI adoption (fortune.com)
javafox 45 seconds ago [-]
I had the chance to try an IBM-internal AI. It was a normal chat interface where one could select models up to Sonnet 4.5. I have not seen anything agentic. So there is that.
halamadrid 19 hours ago [-]
I pay $20 for OpenAI and codex makes me incredibly productive. With very careful prompts aimed at tiny tasks, I can review, fix and get a lot of things done.

I’d happily pay up to $2k/month for it if I were left with no choice, but I don’t think it will ever get that expensive, since you can run models locally and get much the same result.

That being said, my outputs are similarish in the big picture. When I get something done, I typically don’t have the energy to keep going to get it to 2x or 3x because the cognitive load is about the same.

However I get a lot of time freed up which is amazing because I’m able to play golf 3-4 times a week which would have been impossible without AI.

Productive? Yes. Time saved? Yes. Overall outputs? Similar.

velcrovan 18 hours ago [-]
I would like to know what models people are running locally that get the same results as a $20/month ChatGPT plan
ineedasername 17 hours ago [-]
Same? Not quite as good as that. But Google’s Gemma 3 27B is highly similar to their last Flash model. The latest Qwen3 variants are very good (for my needs at least they are the best open coders), but really, here’s the thing:

There are so many varieties, specialized for different tasks or simply different in performance.

Maybe we’ll get to a one-size-fits-all model at some point, but for now trying out a few can pay off. It also starts to build a better sense of the ecosystem as a whole.

For running them: if you have an Nvidia GPU with 8GB of VRAM you’re probably able to run a bunch, quantized. It gets a bit esoteric when you start getting into quantization varieties, but generally speaking you should find out the sort of integer and float math your GPU has optimized support for, then choose the largest quantized model that matches that support and still fits in VRAM. Most often that’s what will perform best in both speed and quality, unless you need to run more than one model at a time.

To give you a reference point on model choice, performance, GPU, etc.: one of my systems runs an Nvidia 4080 with 16GB VRAM. Using Qwen 3 Coder 30B, heavily quantized, I get about 60 tokens per second.
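As a rough sketch of that sizing rule (the byte math is standard; the overhead figure is just an assumed fudge factor, not anything from this thread):

```python
def fits_in_vram(params_billions: float, bits_per_weight: int,
                 vram_gb: float, overhead_gb: float = 1.5) -> bool:
    """Rough check whether a quantized model's weights fit in VRAM.

    Weight GB = parameter count (billions) * bits per weight / 8.
    overhead_gb is an assumed allowance for KV cache, activations,
    and runtime overhead.
    """
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb + overhead_gb <= vram_gb

# A 7B model at 4-bit (~3.5 GB of weights) fits comfortably in 8 GB:
print(fits_in_vram(7, 4, 8))    # True
# A 27B model at 4-bit (~13.5 GB of weights) does not:
print(fits_in_vram(27, 4, 8))   # False
```

The same arithmetic suggests why a 30B model on a 16GB card needs to be "heavily quantized": at 4 bits the weights alone are about 15GB, so dropping to around 3 bits leaves room for the KV cache.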

Twirrim 15 hours ago [-]
I get tolerable performance out of a quantized gpt-oss 20b on an old RTX 3050 I have kicking around (I want to say 20-30 tokens/s, or faster when the cache is effective). It's appreciably faster on the 4060. It's not quite ideal for more interactive agentic coding on the 3050, but approaching it, and it fits nicely into "coding in the background while I fiddle on something else" territory.
Twirrim 6 hours ago [-]
Just in case anyone hasn't seen this yet:

https://github.com/ggml-org/llama.cpp/discussions/15396 is a guide for running gpt-oss on llama-server, with settings for various amounts of GPU memory, from 8GB on up.

ineedasername 8 hours ago [-]
Yeah, tokens per second can very much influence the work style, and therefore the mindset, a person should bring to usage. You can also build on the results of a faster but less-than-SOTA-class model in different ways. I can let a coding-tuned 7-12B model “sketch” some things at higher speed, or even a variety of things, review in real time, then pass off to a slower, more capable model saying “this is structurally sound, or at least the right framing; tighten it all up in the following ways…” and let it run in the background.
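That fast-sketch/slow-refine loop can be expressed as a tiny pipeline. This is only an illustrative sketch: `fast_model` and `slow_model` are hypothetical callables standing in for whatever local or hosted clients you actually use.

```python
def draft_then_refine(task: str, fast_model, slow_model) -> str:
    """Two-stage workflow: a small fast model sketches a draft,
    then a larger, slower model tightens it up in the background."""
    draft = fast_model(f"Sketch a rough implementation of: {task}")
    # ...review the draft in real time here, then hand it off...
    return slow_model(
        "This is structurally sound, or at least the right framing. "
        "Tighten it all up:\n" + draft
    )

# Trivial stand-ins just to show the plumbing; swap in real clients.
fast = lambda prompt: f"[draft for: {prompt}]"
slow = lambda prompt: f"[refined: {prompt}]"
result = draft_then_refine("a config file parser", fast, slow)
print(result.startswith("[refined:"))  # True
```

The point of the split is that the cheap model's latency suits interactive review, while the expensive model's pass can run unattended.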
saratogacx 17 hours ago [-]
The run-at-home suggestion was in the context of $2k/mo. At that price you can make your money back on self-hosted hardware at a much more reasonable pace compared to $20/mo (or even $200).
giancarlostoro 6 hours ago [-]
Well, there's an open-source GPT model you can run locally. I don't think running models locally is all that cheap, considering top-of-the-line GPUs used to be $300 and now you're lucky if you get the best GPU for under $2000. The better models require a lot more VRAM. Macs can run them pretty decently, but now you're spending $5000-plus when you could have just bought a rig with a 5090 and mediocre desktop RAM, because Sam Altman has ruined the RAM pricing market.
Our_Benefactors 2 hours ago [-]
Mac can run larger models due to the unified memory architecture. Try building a 512GB nvidia VRAM machine. You basically can’t.
everforward 5 hours ago [-]
I got some decent mileage out of aider and Gemma 27B. The one-shot output was a little worse, but I don’t have to worry about paying per token or hitting plan limits, so I felt more free to let it devise a plan, run it in a loop, etc.

Not having to worry about token limits is surprisingly cognitively freeing. I don’t have to worry about having a perfect prompt.

joquarky 17 hours ago [-]
And what hardware they needed to run the model, because that's the real pinch in local inference.
colonCapitalDee 17 hours ago [-]
There are no models that you can run locally that'll match a frontier LLM
46493168 4 hours ago [-]
Marx in his wildest nightmare couldn’t have anticipated how much the working class would sell itself short with the advent of AI. Friend, you should be doing more than golf…
sublinear 12 hours ago [-]
Bro, nobody wants to hear about the hustle anymore. We're in the second half of this decade now.
JumpCrisscross 6 hours ago [-]
> nobody wants to hear about the hustle anymore

Plenty of people are still ambitious and successful.

chasd00 20 hours ago [-]
Some stats are trickling out in my company. Code-heavy consulting projects show about 18% efficiency gains, but I have problems with that number because no one has been able to tell me how it was calculated. Story points, actual vs. estimated, is probably how it was done, but that’s nonsensical because we all know how subjective estimates and even actuals are. It’s probably impossible to get a real number that doesn’t contain a significant amount of “well, I feel about x% more efficient…”
culi 19 hours ago [-]
More interesting imo would be a measure of maintainability. I've heard that code that's largely written by AI is rarely remembered by the engineer who submitted it, even a week after merging.

You're almost "locked in" to using more AI on top of it then. It may also make it harder to give estimates to non-technical staff on how long it'd take to make a change or implement a new feature

chasd00 18 hours ago [-]
I don’t know how to measure maintainability, but the AI-generated code I’ve seen in my projects is pretty plain-vanilla standard patterns with comments. So less of a headache than a LOT of human code I’ve seen. Also, one thing the agents are good at, at least in my experience so far, is documenting existing code. This goes a long way in maintenance. It’s not always perfect, but as the saying goes, documentation is like sex: when it’s good it’s great, and when it’s bad it’s better than nothing.
ivanbalepin 14 hours ago [-]
By "maintainability" and "rarely remembered by the engineer" I'm assuming the bigger concern (beyond commenting and sane code) is that once everyone starts producing tons of code without looking - and reading (reviewing) code is, to me at least, much harder than writing - all of this goes unchecked:

* subtle footguns

* hallucinations

* things that were poorly or incompletely expressed in the prompt and ended up implemented incorrectly

* poor performance or security bugs

other things (probably correctable by fine-tuning the prompt and the context):

* lots of redundancy

* comments that are insulting to the intelligence (e.g., "here we instantiate a class")

* ...

not to mention reduced human understanding of the system and where it might break or how this implementation is likely to behave. All of this will come back to bite during maintenance.

nunez 14 hours ago [-]
I find it funny that we, collectively, are now okay with comments in the code.

I remember the general consensus on this _not even two years ago_ being that the code should speak for itself and that comments harm more than help.

This matters less when agentic tools are doing the maintenance, I suppose, but the backslide in this practice is interesting.

lbreakjai 10 hours ago [-]
It's never been the consensus. As far back as I can remember, the wisdom was always to comment why the code does what it does if needed, and to avoid saying what the code does.

Saying that function "getUserByName" fetches a user by name is redundant. Saying that a certain method is called because of a quirk in a legacy system is important.

I regularly implement financial calculations. Not only do I leave comments everywhere, I tend to create a markdown file next to the function to summarise and explain the context around the calculation. Just plain English: what it's supposed to do, the high-level steps, etc.

fleabitdev 6 hours ago [-]
I'd describe that as a trend, rather than a consensus.

It wasn't an entirely bad idea, because comments carry a high maintenance cost. They usually need to be rewritten when nearby code is edited, and they sometimes need to be rewritten when remote code is edited - a form of coupling which can't be checked by the compiler. It's easy to squander this high cost by writing comments which are more noise than signal.

However, there's plenty of useful information which can only be communicated using prose. "Avoid unnecessary comments" is a very good suggestion, but I think a lot of people over-corrected, distorting the message into "never write comments" or "comments are a code smell".

palmotea 14 hours ago [-]
> I remember the general consensus on this _not even two years ago_ being that the code should speak for itself and that comments harm more than help.

If that was the consensus, it was wrong. There are valuable kinds of comments (whys, warnings, etc) that code can never say.

smileysteve 7 hours ago [-]
In the context of the thread, that's because AI fixes the key problem with comments: it maintains them when the code is updated.
twelve40 14 hours ago [-]
Yeah, that was weird. It was like a cult, and some coworkers of mine were religiously hunting down every comment in other people's MRs, just kind of assuming that "no comments" is a hard rule. Very strange; I had to fight many battles for my sanity. There are many cases where you may want to explain why something is coded the way it is, not just how.
brigandish 17 hours ago [-]
Something I occasionally do is ask it to extensively comment a section of code for me, and to tell me what it thinks the intent of the code was, which takes a lot of cognitive load off of me. It means I'm in the loop without shutting off my brain, as I do have to read the code and understand it, so I find it a sweet spot of LLM use.
smileysteve 7 hours ago [-]
> engineer that submitted it

This is a poor metric as soon as you reach a scale where you've hired an additional engineer, where 10% annual employee turnover reflects > 1 employee, much less the scale where a layoff is possible.

It's also only a hope as soon as you have dependencies that you don't directly manage like community libraries.

rgblambda 11 hours ago [-]
chasd00 did mention that this was for consulting projects, where presumably there's a handover to another team after a period of time. Maintainability was never a high priority for consultants.

But in general I agree with your point.

thegrim000 9 hours ago [-]
Hint: Make sure the people giving you the efficiency improvement numbers don't have a vested interest in giving you good numbers. If so, you can not trust the numbers.

Reminds me of my last job where the team that pushed React Native into the codebase were the ones providing the metrics for "how well" React Native was going. Ain't no chance they'd ever provide bad numbers.

fragmede 20 hours ago [-]
better than lines of code at least!
layer8 24 hours ago [-]
The title is a bit misleading. Reading the article, the argument seems to be that entry-level applicants (are expected to) have the highest AI literacy, so they want them to drive AI adoption.
_heimdall 22 hours ago [-]
At least today, I expect this will fail horribly. The challenge today isn't AI literacy, in my experience; it's the domain knowledge required to keep LLMs on the rails.
giantg2 22 hours ago [-]
People literate in AI, but inexperienced in all other facts. What could go wrong!
palmotea 14 hours ago [-]
> People literate in AI, but inexperienced in all other facts. What could go wrong!

It sounds like it appeals to MBAs, who are people literate in management, but inexperienced in all other areas.

reactordev 21 hours ago [-]
ClawdBot Boardroom Edition
dgxyz 23 hours ago [-]
Sounds like the first step of a galactic scale fuck up
jnaina 22 hours ago [-]
"Galactic scale" and "Fuck Up" are on brand for IBM.
outside1234 23 hours ago [-]
It is IBM after all
dgxyz 23 hours ago [-]
Totally fair point.
forgetfreeman 22 hours ago [-]
dotcom implosion redux
dgxyz 22 hours ago [-]
It certainly feels that way. I was there. Fortunately had just waltzed into the tech side of things and scurried off back to my professional career for a couple of years.

I watched a lot of stuff burn. It was horrifying. We are nearly there again.

joquarky 17 hours ago [-]
For a time, there was a lot of good deals on nice used office furniture.
dgxyz 14 hours ago [-]
Yeah got a nice desk and a trinitron out of it. Covid got me an Aeron :)
forgetfreeman 21 hours ago [-]
Yeah similar story here. I had to spend a couple of years painting houses before the local market recovered enough that tech jobs were a thing again. Shit was surreal. There was one guy I knew that went from building multi-million dollar server and networking projects for IBM to literally working as unskilled labor on a fencing crew just to make rent.
reactordev 21 hours ago [-]
Problem is there aren't jobs where you can go and hide until the economy recovers this time.
gerdesj 23 hours ago [-]
I hope they have a good 10 years experience in that "literacy".
MikeNotThePope 23 hours ago [-]
I just run sub-agents in parallel. Yesterday I used Codex for the first time. I spun up 350,640 agents and got 10 years of experience in 15 minutes.
gunsle 22 hours ago [-]
Unpatchable xp glitch
ponector 23 hours ago [-]
You should also mention how many millions of lines of code you created.
cruffle_duffle 23 hours ago [-]
New metric: agent-hours spent on a task. Or do we measure in tokens? Clearly more tokens burned == more experience, right?
AlexCoventry 21 hours ago [-]
There are actually books which recommend that organizations track employee tokens burned as a proxy for AI adoption. Surprised me a bit.
reactordev 21 hours ago [-]
it's the only KPI available.
onlyrealcuzzo 22 hours ago [-]
25 years of LLM experience for a mid-level
thegrim000 9 hours ago [-]
"AI is going to wipe out junior developers!"

They actually hire more junior developers

"Uhh .. to adopt AI better they're hiring more junior developers!"

easterncalculus 9 hours ago [-]
This cope is especially low quality given the context that this is just another purge of older workers at IBM.
thaway123123 3 days ago [-]
Is this for their in-house development or for their consulting services?

Because the latter would still be indicative of AI hurting entry level hiring since it may signal that other firms are not really willing to hire a full time entry level employee whose job may be obsoleted by AI, and paying for a consultant from IBM may be a lower risk alternative in case AI doesn't pan out.

raw_anon_1111 1 days ago [-]
And if it is for consulting, I doubt very seriously they will be based in the US. You can’t be price-competitive hiring an entry-level consultant in the US, and no company is willing to pay the bill rate for US-based entry-level consultants unless their email address is @amazon.com or @google.com.

Source: current (full time) staff consultant at a third party cloud consulting firm and former consultant (full time) at Amazon.

xenospn 1 days ago [-]
Why would Amazon bring on a full-time consultant instead of just hiring you?
raw_anon_1111 24 hours ago [-]
I worked internally at AWS Professional Services - their internal consulting department - every AWS ProServe employee is a “blue badge” employee with the same initial four year offer structure of base + prorated signing bonus + RSUs (5/15/40/40). Google also has a large internal consulting department for GCP.

I can’t fault you for not knowing AWS ProServe exists. I didn’t know either until a recruiter reached out to me.

Insanity 24 hours ago [-]
My partner is also a consultant and one client was Google. I’m also confused about the exact reason why they didn’t just hire someone.
roenxi 24 hours ago [-]
"You see we leased this back from the company we sold it to and that way it comes under the monthly current budget and not the capital account."

~ Monty Python, The Meaning of Life (1983), on the Machine that Goes Ping.

raw_anon_1111 24 hours ago [-]
No that’s not what I meant at all. Amazon Professional Services are made up of full time “blue badge” employees who get the same type of base + bonus + RSUs that all other blue badge employees get.
kjkjadksj 24 hours ago [-]
One might ask what value seniors hold if their expertise from the junior stage is obsolete. Maybe the new junior will just be reining in the LLM that does the work, and senior-level knowledge and compensation rot away as those people retire without replacement.
lazide 22 hours ago [-]
Huh?
kjkjadksj 20 hours ago [-]
People seem to think LLMs killing the CS career means companies will still pay senior salaries to shepherd agentic LLM-style development. I think it is the senior that is the dinosaur here. As we speak, CS curriculums are changing to teach people to crunch along with AI. The next batch of juniors will be taking these jobs. There won’t be seniors anymore, at least not at the salaries we’ve come to associate with that role. The skill is getting removed from the profession and replaced with a framework with a far lower barrier of entry.
lazide 20 hours ago [-]
Bwahahahahahaha

There is no framework, just confusing junk. At some point, it needs to actually work and….

kjkjadksj 3 hours ago [-]
Gluing the junk together is what the Bengaluru office is for. You don’t need Bay Area comp for that.
lazide 2 hours ago [-]
‘at some point it needs to actually work’….
mathattack 2 days ago [-]
Interesting given the current age discrimination lawsuit:

https://www.cohenmilstein.com/case-study/ibm-age-discriminat...

notepad0x90 2 days ago [-]
Another one? What is it with IBM? They must really save lots of money, in a way no one else has figured out, by firing people at 50. This is like the 3rd or 4th one I've heard of from them.
Spooky23 23 hours ago [-]
It’s not very hard. Take a guy making $200k and 30% benefit overhead and replace with two offshore people at $50k total comp.
notepad0x90 5 hours ago [-]
They don't have to keep giving people raises. Why wait until the guy is 50, and not when he is 30 and making $100k? It's not like they have people doing manual labor; these are office jobs. People's faculties don't decline until their late 60s at the earliest. Why don't other multinationals do this and get sued too? What makes IBM special?
lukevp 6 hours ago [-]
25k TC for an engineer? From where?
boondongle 21 hours ago [-]
No - it's that they fired their vets in high cost areas and kept them in low cost areas.

A large number of vets can now choose to reapply for their old job (or a similar one) at a fraction of the price, with their pension/benefits reduced, while the vets in low-cost centers now become the SMEs. In many places in the company they were not taken seriously, due both to internal politics and to quite a bit of performative "output" that either didn't do anything or had to be redone.

Nothing to do with AI - everything to do with Arvind Krishna. One of the reasons the market loves him, but the tech community doesn't necessarily take IBM seriously.

sqircles 24 hours ago [-]
IBM has cut ~8,000 jobs in the past year or so.

Sounds like business as usual to me, with a little sensationalization.

slopinthebag 22 hours ago [-]
You know when someone is singing the praises about AI and they get asked "if you're so much more productive with AI, what have you built with it"? Well I think a bunch of companies are asking this same question to their employees and realising that the productivity gains they are betting on were overhyped.

LLMs can be a very useful tool and will probably lead to measurable productivity increases in the future, but in their current state they are not capable of replacing most knowledge workers. Remember, even computers as a whole didn't measurably impact the economy for years after their adoption. The real world is a messy place and hard to predict!

edoceo 21 hours ago [-]
> measurable productivity

Which measure? Like when folks say something is more "efficient": it's more time-efficient to fly, but one trades away other efficiencies. Efficiency, like productivity, needs a second word with it to communicate properly.

What's more productive? Lines of code (a weak measure)? Features shipped? Bugs fixed? Time saved by the company? Time for the client? Shareholder value (lame)?

I don't know the answer but this year (2026) I'm gonna see if LLM is better at tax prep than my 10yr CPA. So that test is my time vs $6k USD.

bluGill 21 hours ago [-]
Time could be very expensive, as mistakes on taxes can be fraud resulting in prison time. Mostly they understand people make mistakes, but those need to look like honest mistakes, and an LLM's may not. Remember, you sign your taxes as correct to the best of your knowledge. With your CPA signing, you are admitting you outsourced understanding to an expert, something they accept. However, if you sign alone, you are saying you understand it all, even if you don't.
ineedasername 17 hours ago [-]
These days productivity at a macroeconomic scale is usually cited as something like GDP per hour worked.

The most recent BLS figure, for the last quarter of ‘25, was an annualized rate of 5.4%.

The historic annual average is around 2%.
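For reference, the conversion behind a number like that: a quarterly growth rate is annualized by compounding it over four quarters. (The 1.32% input below is just an assumed illustrative figure, not a BLS number.)

```python
def annualize(quarterly_rate: float) -> float:
    """Compound one quarter's growth rate over four quarters."""
    return (1 + quarterly_rate) ** 4 - 1

# Roughly 1.32% growth in a single quarter annualizes to about 5.4%:
print(round(annualize(0.0132) * 100, 1))  # 5.4
```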

It’s a bit early to draw a conclusion from this. Also, it’s not an absolute measure but GDP per hour worked, so to cut through any proxy factors or intermediating signals you’d really need to know how many hours were worked, which I don’t have to hand.

That said, in general macro sense, assuming hours worked does not decrease, productivity +% and gdp +% are two of the fundamental factors required for real world wage gains.

If you’re looking for signals in either direction on AI’s influence on the economy, these are numbers to watch, among others. The Federal Reserve Chair’s report after each meeting is (IMO) one of the most convenient places to get very fresh hard numbers combined with cogent analysis, and usually some Q&A from the business press asking at least some of the questions I’d want to ask.

If you follow these fairly accessible speeches after meetings, you’ll occasionally see how lots of the things in them end up being thematic in lots of the stories that pop up here weeks or months later.

_delirium 17 hours ago [-]
Economy-wide productivity can be measured reasonably well, although there are a few different measures [1]. The big question I guess is whether AI will make a measurable impact there. Historically tech has had less impact than people thought it would, as noted in Robert Solow's classic quip that "You can see the computer age everywhere but in the productivity statistics". [2]

[1] https://www.oecd.org/en/topics/sub-issues/measuring-producti...

[2] https://en.wikipedia.org/wiki/Productivity_paradox

Bombthecat 15 hours ago [-]
Try Agent Zero; you can upload your bank (or credit card) statements as CSV etc., and it can then analyse them.
another_twist 20 hours ago [-]
Number of features shipped. Traction metrics. Revenue per product. Ultimately business metrics. For example, tax prep effectiveness would be a proper experiment tied to specific metrics.
belter 20 hours ago [-]
I used to write bugs in 8 hours. Now I write the same bugs in 4. My Productivity doubled. \s
Morromist 20 hours ago [-]
I hear this every day, and I'm sure it's true sometimes, but where is the tsunami of amazing software LLM users are producing? Where are the games that make the old games look like things from a bygone era? Where are the updates to the software that I currently use that greatly increase its capabilities? I have seen none of this.

I get that it takes a long time to make software, but people were making big promises a year ago, and I think it's time to start expecting some results.

pickettd 19 hours ago [-]
Reddit and GitHub are littered with people launching new projects that appear to be way more feature-rich than new tool/app launches from previous years. I think it is a lot harder to get noticed with a new tool/app now because of this increase in the volume of launches.

Also weekend hackathon events have completely/drastically changed as an experience in the last 2-3 years (expectations and also feature-set/polish of working code by the end of the weekend).

And as another example, you see people producing CUDA kernels and MLX ports as an individual (with AI) way more these days (compared to 1-2 years ago), like this: https://huggingface.co/blog/custom-cuda-kernels-agent-skills

sarchertech 19 hours ago [-]
I have no way of verifying any of those. Something I can easily verify: new games launched on Steam.

January numbers are out and there were fewer games launched this January than last.

kelipso 8 hours ago [-]
Specifically about custom CUDA kernels, I’ve implemented them with AI that significantly sped up the code in this project I worked on. Didn’t know how to code these kernels at all, but I implemented and tested a couple of variations and got it running fast in just two days. Basically impossible for me before AI coding (well not impossible but it would have taken me many weeks, so I wouldn’t have tried it).
Bombthecat 15 hours ago [-]
Or people just don't publish them, because they don't want to deal with users.

I wrote a Python DHCP server which connects to a Proxmox server to hand out stable IPs as long as the VM/container exists in Proxmox.

Not via MAC, but basically via VM ID (or name).
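The core idea (a lease keyed to the VM ID rather than the MAC) can be sketched in a few lines. This is a hypothetical illustration, not the commenter's actual code; the subnet and the modulo mapping are assumptions:

```python
import ipaddress

def stable_ip_for_vmid(vmid: int, subnet: str = "10.0.0.0/24") -> str:
    """Derive a deterministic host address from a Proxmox VM ID, so
    the lease survives MAC/NIC changes. (The modulo can collide; a
    real server would keep a lease table and probe before offering.)"""
    hosts = list(ipaddress.ip_network(subnet).hosts())
    return str(hosts[vmid % len(hosts)])

# Same VM ID always maps to the same address:
print(stable_ip_for_vmid(100))  # 10.0.0.101
```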

AnthonyMouse 17 hours ago [-]
The one thing AI is consistently better at than humans is shipping quickly. It will give you as much slop as you want right away, and if you push on it for a short period of time it will compile and if you run it a program will appear that has a button for each of the requested features.

Then you start asking questions like, does the button for each of the features actually do the thing? Are there any race conditions? Are there inputs that cause it to segfault or deadlock? Are the libraries it uses being maintained by anyone or are they full of security vulnerabilities? Is the code itself full of security vulnerabilities? What happens if you have more than 100 users at once? If the user sets some preferences, does it actually save them somewhere, and then load them back properly on the next run? If the preferences are sensitive, where is it saving them and who has access to it?

It's way easier to get code that runs than code that works.

Or to put it another way, AI is pretty good at writing the first 90% of the code:

    "The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time." — Tom Cargill, Bell Labs
copperx 18 hours ago [-]
Nowadays there are DOZENS of apps being launched solving the same problem.

Have you ever looked for, say, WisprFlow alternatives? I had to compare like 10 extremely similar solutions. Apps have no moat nowadays.

That's happening all over the place.

Bombthecat 15 hours ago [-]
Just check Foundry VTT and its modules. The number of modules released has exploded since AI.
SoftTalker 20 hours ago [-]
Even better, I write more bugs in 4 hours than I used to in 8.
collingreen 19 hours ago [-]
And the bugs take me WAY longer to find and fix now!
belter 19 hours ago [-]
"If debugging is the process of removing software bugs, then programming must be the process of putting them in."

- Edsger Dijkstra

pfdietz 19 hours ago [-]
A 10x employee creates enough bugs to keep 10 other employees busy.
re-thc 18 hours ago [-]
10 other agents.
pfdietz 18 hours ago [-]
"I'm ten times the agent you are, agent 8.6!"
milchek 19 hours ago [-]
I think for a lot of folks it basically comes down to just using AI to make the tasks they have to do easier and to free up time for themselves.

I’d argue the majority use AI this way. The minority of “10x” workers using it to churn through more tasks are the motivated ones driving real business value, but let’s be honest, in a soulless enterprise 9-5 these folks are few and far between.

sarchertech 19 hours ago [-]
Sure, but why haven’t we seen a drastic increase in single-person startups?

Why were there fewer games launched on Steam this January than last?

vidarh 18 hours ago [-]
Because very few know how to use AI. I teach AI courses on the side, and I've audited supervised fine-tuning and RLHF projects for a major provider. From seeing real prompts, many specifically from people who work with agents every day, people do not yet have the faintest clue how to productively prompt AI. A lot of people prompt them in ways that are barely coherent.

Even if models stopped improving today, it'd take years before we see the full effects of people slowly gaining the skills needed to leverage them.

hackingonempty 16 hours ago [-]
I'd love to look at what you consider to be good prompts if you could provide a link.
vidarh 16 hours ago [-]
You'd be surprised how low the bar is. What I'm seeing is down to the level of people not writing complete sentences.

There doesn't need to be any "magic" there. Just clearly state your requirements. And start by asking the model to plan out the changes and write a markdown file with a plan first (I prefer this over e.g. Claude Code's plan mode, because I like to keep that artefact), including planning out tests.

If a colleague of yours not intimately familiar with the project could get the plan without needing to ask followup questions (but able to spend time digging through the code), you've done pretty well.

You can go overboard with agents to assist in reviewing the code, running tests, etc. as well, but that's the second 90%. The first 90% is just to write a coherent request for a plan, read the plan, ask for revisions until it makes sense, and tell it to implement it.

munch117 10 hours ago [-]
> Just clearly state your requirements.

Nothing new here. Getting users to clearly state their requirements has always been like pulling teeth. Incomplete sentences and all.

If the people you are teaching are developers, they should know better. But I'm not all that surprised if many of them don't. People will be people.

vidarh 9 hours ago [-]
You're right, they should know better, but I think a lot of them have gotten away with it because most of them are not expected to produce written material setting out missing assumptions and breaking the task down in more detail before proceeding to work, so a lot have never gotten the practice.

Once people have had the experience of being a lead and having to pass tasks to other developers a few times, most seem to develop this skill at least to a basic level, but even then it's often informal and they don't get enough practice documenting the details in one go, say by improving a ticket.

nunez 14 hours ago [-]
Not surprising. Many folks struggle with writing (hence why ChatGPT is so popular for writing stuff), so people struggling to coherently express what they want and how makes sense.

But the big models have come a long way in this regard. Claude + Opus especially. You can build something with a super small prompt and keep hammering it with fix prompts until you get what you want. It's not efficient, but it's doable, and it's much better than half a year ago, when you had to write a full spec.

KellyCriterion 4 hours ago [-]
> Claude + Opus especially. You can build something with a super small prompt and keep hammering it with fix prompts until you get what you want.

LOL: especially with Claude this was only in 1 out of 10 cases?

Claude's output is usually (near) production-ready on the first prompt if you precisely describe where you are, what you want, how to get there, and what the result should be.

vidarh 9 hours ago [-]
This is exactly it. A lot of people use it that way, and it's still a vast improvement, but they could also generally do a lot better with some training. I think this is one of the areas where you'll unfortunately see a big gap develop between developers who do this well, and can have the models work undisturbed for longer and longer while doing other things, and those who end up needing a lot more rework than necessary.
milchek 18 hours ago [-]
It comes back to that whole discussion about intelligence becoming cheaper and more accessible while motivation and agency remain stable.

I’ve worked with a few folks who have been given AI tools (like a designer who never coded in his life, or a video/content creator) and who have absolutely taken off, creating web apps and various little tools and process improvements for themselves just by vibecoding what they wanted. The key with both these individuals is high agency, curiosity, and motivation. That was innate; the AI tooling just gave them the external means to realise what they wanted to do with more ease.

These kinds of folks are not the majority, and we’re still early into this technological revolution imo (models are improving on a regular basis).

In summary, we’ve given the masses access to “intelligence”, but creativity and motivation stay the same.

h14h 17 hours ago [-]
My guess is that the true impact of this will be difficult to measure for a while. Most "single-person start-ups" will probably not be high-visibility VC-backed, YC affairs, and rather solopreneurs with a handful of niche moonlighted apps each making 3-4 digit monthly revenue.
copperx 18 hours ago [-]
Haven't you? I have! In another reply, I noted the avalanche of WisprFlow competitors, as just one example.
rustystump 19 hours ago [-]
Because AI doesn't work like this: “make me money” or “make Stardew Valley in space”. The hard part is the painful exploration and the taste necessary to produce something useful. The number of these kinds of people did not increase with AI.

E.g., AI is a big multiplier, but that doesn't mean it will translate to “more” in the way people think.

17 hours ago [-]
Bombthecat 15 hours ago [-]
Huh? Fewer games launched on Steam? First time I've heard that. Any source?

But my guess would be: games are closed source and need physics, which AI is bad at.

johnsmith1840 21 hours ago [-]
I bet you the predictions are largely correct but technology doesn't care about funding timelines and egos. It will come in its own time.

It's like trying to make fusion happen only by spending more money. It helps, but it doesn't fundamentally change the pace of true innovation.

I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.

slopinthebag 21 hours ago [-]
Well, the predictions are tied to the timelines. If someone predicts that AI will take over writing code sometime in the future I think a lot of people would agree. The pushback comes from suggesting it's current LLMs and that the timeline is months and not decades.
echelon 21 hours ago [-]
> I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.

It comes from the company best equipped with capital and infra.

If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.

This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.

benreesman 18 hours ago [-]
No one has built business AI that is flat-out correct to the standards of a high-redundancy human organization.

Individuals make mistakes in air traffic control towers, but as a cumulative outcome it's a scandal if airplanes collide midair. Even in contested airspace.

The current infrastructure never gets there. There is no improvement path from MCP to air traffic control.

It's hard work and patience and math.

bawolff 19 hours ago [-]
Meh, I think a lot of companies just wanted an excuse to do layoffs without the bad press, and AI was convenient.
shusaku 17 hours ago [-]
“AI will steal your job” never made sense. If your company is doing badly, sure, maybe you fire people after automating their jobs. But we’re in a growth-oriented economic system. If the company is doing well and AI increases productivity, you actually will hire more people, because every person is that much more of a return on investment.
echelon 21 hours ago [-]
No. They're firing high paid seniors and replacing them with low pay juniors. This is IBM we're talking about.

The "limits of AI" bit is just smokescreen.

Firing seniors:

> Just a week after his comments, however, IBM announced it would cut thousands of workers by the end of the year as it shifts focus to high-growth software and AI areas. A company spokesperson told Fortune at the time that the round of layoffs would impact a relatively low single-digit percentage of the company’s global workforce, and when combined with new hiring, would leave IBM’s U.S. headcount roughly flat.

New workers will use AI:

> While she admitted that many of the responsibilities that previously defined entry-level jobs can now be automated, IBM has since rewritten its roles across sectors to account for AI fluency. For example, software engineers will spend less time on routine coding—and more on interacting with customers, and HR staffers will work more on intervening with chatbots, rather than having to answer every question.

slopinthebag 21 hours ago [-]
Where does it say those cuts were senior software developers?

Obviously they want new workers to use AI but I don't really see anything to suggest they're so successful with AI that they're firing all their seniors and hiring juniors to be meatbags for LLMs.

xp84 20 hours ago [-]
This just doesn't make any sense. Juniors + AI just does not equal seniors, except for prototyping greenfield projects. Who knows about 2 months from now, it moves fast and stuff, but not right now.
robocat 19 hours ago [-]
> just doesn't make any sense

I suspect the gap is that you don't know enough about IBM's business model.

When something doesn't make sense, a very common cause is a lack of context: many things can be extremely sensible for a business to do; things which appear insane from an outsider's point of view.

19 hours ago [-]
fragmede 20 hours ago [-]
You probably aren't going to find a lot of articles discussing how water is wet, either.
throwaway613746 21 hours ago [-]
> "if you're so much more productive with AI, what have you built with it"

If my boss asked me a question like this my reply would be "exactly what you told me to build, check jira".

If you want to know if I'm more productive - look at the metrics. Isn't that what you pay Atlassian for? Maybe you could ask their AI...

bluGill 21 hours ago [-]
As a senior engineer, sometimes the system shows I did nothing because I was helping others. Sometimes I get the really hard problem, though '"teh" isn't spelled "the"'-type bugs are more common than thread race conditions - and a lot faster to solve.
throwaway613746 6 hours ago [-]
[dead]
ChadMoran 21 hours ago [-]
[flagged]
fd-codier 20 hours ago [-]
Every time someone says something like that, there is no link to the product. Maybe because it doesn't exist?
graeme 18 hours ago [-]
Historically in a lot of niches such as search marketing etc, people would not name their successful projects because the barrier to entry is low.

It someone can use AI to make a $50,000/year project in three months, then someone else can also do so.

Obviously some people hype and lie. But also, obviously, some people DID succeed at SEO/affiliate marketing/dropshipping, etc. AI resembles those areas in that the barrier to entry is low.

To get actual reports you often need to look to open source. Simon Willison details how he used it extensively and he has real projects. And here Mitchell Hashimoto, creator of Ghostty, details how he uses it: https://mitchellh.com/writing/my-ai-adoption-journey

Update: OP posted their own project however. Looks nice!

vidarh 18 hours ago [-]
This is definitely the case. I have a project that, while not wildly profitable yet, is producing real revenue, but I will not give details of it because the moat is so small. The main moat is that I know the potential is real, and hopefully not enough other people do, yet. I know it will disappear quickly, so I'm trying to make what I can of it while it's there. I may talk about it once the opportunity is gone.

It involves a whole raft of complex agents + code they've written, but that code and the agents were written by AI over a very short span of time. And as much as I'd like to stroke my own ego and assume it's one of a kind, realistically if I can do it, someone else can too.

ChadMoran 18 hours ago [-]
Still need good taste and judgement to build the thing people actually want to use.
shusaku 17 hours ago [-]
What an awful comment. The person above you is now flagged because of your paranoia. Of course later they post a link to exactly what they built.
ChadMoran 16 hours ago [-]
I don't even know what flagged means lol
ChadMoran 18 hours ago [-]
[flagged]
ChadMoran 16 hours ago [-]
[flagged]
albedoa 3 minutes ago [-]
lmfao you're doing great man, keep posting.
albedoa 19 hours ago [-]
He is overwhelmed with customers. Can't risk any more awareness.
ChadMoran 18 hours ago [-]
Legitimately am. I get daily emails from customers telling me how much they love my product. Go search Google, it's free.

Search for "Rivian Roamer".

winterismute 20 hours ago [-]
Sounds nice, for how many years have you had that annual recurring revenue so far?
ChadMoran 18 hours ago [-]
I only started charging customers in September. Super-linear growth. I launched annual subscriptions and within less than a week > 15% of customers switched.
idopmstuff 20 hours ago [-]
I'm with you. I own a business and have created multiple tools for myself that collectively save me hours every month. What were boring, tedious tasks now just get done. I understand that the large-scale economic data are much less clear about productivity benefits, but in my individual case they could not be more apparent.
AussieWog93 19 hours ago [-]
I'm thirding this sentiment!

I run an eComm business and have built multiple software tools that each save the business $1000+ per month, in measurable wage savings/reductions in misfires.

What used to take a month or so can now be spat out in less than a week, and the tools are absolutely fit for purpose.

It's arguably more than that, since I used to have to spread that month of work over 3-6 months (working part-time while also doing daily tasks at the warehouse), but now I can just take a week WFH and come back with a notable productivity gain.

I will say, to give credit to the anti-AI-hype crowd, that I make sure to roll the critical parts of the software by hand (things like the actual calculations that tell us what price to list an item at, for example). I did try to vibecode too much once and it backfired.

But things like UIs, task managers for web apps, simple API calls to print a courier label, all done with vibes.
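The split described above (hand-rolled deterministic core, vibe-coded glue around it) can be sketched like this; the pricing rule, the figures, and the function names are all invented for illustration:

```python
# Hand-written, deterministic core: the pricing rule is small, explicit, and
# easy to unit-test, so a generated UI or API layer can call it without being
# trusted with the calculation itself. The margin and fee figures are made up.

def list_price(cost: float, margin: float = 0.25, marketplace_fee: float = 0.12) -> float:
    """Price an item so the target margin survives the marketplace fee."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return round(cost * (1 + margin) / (1 - marketplace_fee), 2)

# The vibe-coded layer just formats and displays; a bug here can't corrupt prices.
def render_row(sku: str, cost: float) -> str:
    return f"{sku}: ${list_price(cost):.2f}"

print(render_row("WIDGET-1", 10.0))  # -> WIDGET-1: $14.20
```

Keeping the calculation in one small, tested function is what makes it safe to generate everything around it.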

ChadMoran 18 hours ago [-]
Understanding when to make something deterministic and when not to is critical. Taste and judgement are critical.
holtkam2 20 hours ago [-]
Has anyone noticed Amazon or AWS shipping features faster than their pre-GenAI baseline? I haven't
ChadMoran 18 hours ago [-]
I'm noticeably faster at shipping.
ChadMoran 18 hours ago [-]
The only thing the comments told me is that people lack the judgement and taste to do it themselves. It's not hard: identify a problem that's niche enough and that you can solve.

Stop arguing on HN and get to building.

SomaticPirate 20 hours ago [-]
Every hype AI post is like this: “I’m making $$$ with these tools and you’re ngmi.” I completely understand the joys of a few good months, but this is the same as the people working two FAANG jobs at the start of Covid. Illusory and not sustainable.
ChadMoran 18 hours ago [-]
[flagged]
mlyle 20 hours ago [-]
I built and debugged an embedded stub loader for the RP2350 to program MRAM and validate hardware status for a satellite. About 2.5 hours of my time, a lot of it while supervising students/doing other things.

This would have been an unpleasant couple-day-plus task before; possibly more. I had been putting it off because scouring datasheets, register maps, and startup behavior is not fun.

It didn’t know how to troubleshoot the startup successfully itself, though. I had to advise it on a debugging strategy with sentinel values to bisect. But then once explained it fixed the defects and succeeded.

LLMs struggle in large codebases, where the benefit is much smaller for now. But that capability is growing fast, and not everything software developers do is large.
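The sentinel-value bisection strategy mentioned above can be sketched roughly like this. Stage names and marker values are invented; on real hardware the sentinel would live somewhere that survives a reset (a scratch register or non-volatile word), with a plain variable standing in here:

```python
# Sketch of "sentinel values to bisect" startup debugging: each boot stage
# writes a distinct marker before running, so after a hang or reset the last
# marker recorded tells you which stage to bisect into. All stages and
# markers below are hypothetical.

sentinel = 0x0  # stand-in for a reset-surviving scratch word

def mark(stage: int) -> None:
    global sentinel
    sentinel = stage

def init_clocks() -> bool:
    mark(0x1)
    return True

def init_spi() -> bool:
    mark(0x2)
    return True

def probe_mram() -> bool:
    mark(0x3)
    return False  # pretend this is the stage where startup stalls

def boot() -> str:
    for step in (init_clocks, init_spi, probe_mram):
        if not step():
            # The sentinel pinpoints the last stage reached before the failure.
            return f"boot stalled after stage {sentinel:#x}"
    return "boot ok"

print(boot())  # -> boot stalled after stage 0x3
```

Once the failing stage is known, you add finer-grained markers inside it and repeat, narrowing the fault by halves instead of staring at the whole startup path.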

yrds96 19 hours ago [-]
I'm not doubting you or anything, but you just proved the point above by saying you have a successful project without even mentioning which project it is.
ChadMoran 18 hours ago [-]
[flagged]
coffeefirst 20 hours ago [-]
Cool! Can we see it?
ChadMoran 18 hours ago [-]
[flagged]
slopinthebag 21 hours ago [-]
Nice, yeah I feel like there's a big opportunity for tech workers who are product-adjacent to use LLMs to get up to speed building SaaS etc.

Are you worried by any of those claims about SaaS being dead because of AI? lol

ChadMoran 18 hours ago [-]
[flagged]
slopinthebag 18 hours ago [-]
Looks cool. Are you a Rivian owner who solved their own problem or did you stumble upon it randomly??
ChadMoran 18 hours ago [-]
Thanks! I used to own a Tesla and there were similar platforms out there. Bought a Rivian and wanted something like that. I started building this before AI-assisted coding was very popular. But it greatly increased my productivity.
slopinthebag 15 hours ago [-]
There is that quote "there are cathedrals everywhere for those with the eyes to see". I feel like there is a solid variation with solid business opportunities instead of cathedrals haha.

I've found AI to be a big productivity boost for myself, but I don't really use it to generate much actual code. Maybe it could do more for me, idk, but I also don't feel like I'm being left behind. I actually enjoy writing code, but hate most other programming tasks so it's been nice to just focus on what I like. Feels good to have it generate a UI skeleton for me so I can just fill out the styles and stuff. Or figure out stupid build config and errors. Etc etc.

Anyways congrats on the product. I know a lot of people are negative about productivity claims and I'm certainly skeptical of a lot of them too, but if you asked most programmers 5 years ago if a super-autocomplete which could generate working code snippets and debug issues in a project would boost productivity everyone would say yes lol. People are annoyed that its overhyped, but there should still be room for reasonable hype imo.

ChadMoran 5 hours ago [-]
First of all, thank you. I've always been told I have a knack for seeing opportunities others don't.

For me, I always had the ideas and even as a competent engineer, the speed of development annoyed me.

I think folks get annoyed when their reality doesn't match other people's claims. But I have friends who aren't engineers who have launched successful SaaS products. I don't know if it's jealousy or what but people are quite passionate about how it doesn't have productivity gains.

Hell, I remember IntelliSense in Visual Studio being a big boon for me. Now I can run tasks asynchronously; even if it's not faster, it frees up my time.

belter 20 hours ago [-]
Details would help your argument. Since many did the same thing, before the AI wave...

Is the business 3 months old now?

ChadMoran 18 hours ago [-]
It's not an argument, it's a fact.
belter 9 hours ago [-]
It's also a fact that my stopped clock will show the correct time twice a day :-)
ChadMoran 5 hours ago [-]
Fair. I've had super-linear growth since launching in September. Zero marketing outside of a referral program. People genuinely love what I'm building. I get multiple emails per week about how people appreciate the software and how I send out weekly emails about everything I've launched.
belter 1 hours ago [-]
The whole point, which you seem to have missed even by the third interaction, is how AI was the crux of it...
20 hours ago [-]
1970-01-01 6 hours ago [-]
IBM is one of those companies that measures success by complexity. Meaning if it's complicated, they make money with consultants. If it's simple, they bundle it with other complex solutions that require consulting.
toomuchtodo 3 days ago [-]
victor9000 21 hours ago [-]
Yes, junior candidates lacking the knowledge and wisdom to redirect an LLM, that's who will unlock the mythical AI productivity.
sinuhe69 13 hours ago [-]
The article said they called for tripling junior hires but cut 1,000 jobs a month later, "so the number of jobs stays roughly the same".

Certainly they didn’t mean 1,000 junior positions were cut. So what they really want to say is that they cut senior positions as a way of saving costs/making profit in the age of AI? Totally contrary to what other companies believe? Sounds quite insane to me!

awesome_dude 3 days ago [-]
> In the HR department, entry-level staffers now spend time intervening when HR chatbots fall short, correcting output and talking to managers as needed, rather than fielding every question themselves.

The job is essentially changing from "You have to know what to say, and say it" to "make sure the AI says what you know to be right"

jerlam 2 days ago [-]
Probably not on the IBM jobs site yet, where the number of entry level jobs is low compared to the size of the company (~250k):

https://www.ibm.com/careers/search?field_keyword_18[0]=Entry...

Total: 240

United States: 25

India: 29

Canada: 15

google234123 2 days ago [-]
Aren't those general job openings? Like, a junior SWE only needs a single generic posting for all positions.
roncesvalles 18 hours ago [-]
Exactly, that's why counting job postings is a terrible proxy for gauging market conditions. Companies may hire anywhere from 0 to 100s of people through the same JD.
techsocialism 8 hours ago [-]
[dead]
jghn 18 hours ago [-]
Brings a new angle on the old joke: "Actually, Indians"
17 hours ago [-]
Nextgrid 1 days ago [-]
Bold move.

Not because it's wrong, but because it risks initiating the collapse of the AI bubble and the whole "AI is gonna replace all skilled work, any day now, just give us another billion".

Seems like IBM can no longer wait for that day.

int0x29 1 days ago [-]
Is IBM invested big in LLMs? I don't get the impression they have much to lose there.
cootsnuck 22 hours ago [-]
They said they're going to invest like $150B over five years, which is quite a bit smaller than other big tech firms.

They have their Granite family of models, but they're small language models so surely significantly less resources are going into them.

bayindirh 1 days ago [-]
Their CEO already said what he's thinking about all the spending [0].

[0]: https://news.ycombinator.com/item?id=46124324

platevoltage 1 days ago [-]
Good. Nobody needs to rip that bandaid off. Might as well be IBM.
brianwawok 1 days ago [-]
I mean it’s IBM. On average, 70% of their decisions are bad ones. Not sure I’d pay a single bit of attention to what they do.
Nextgrid 1 days ago [-]
To a non-technical individual IBM is still seen as a reputable brand (their consulting business would've been bankrupt long ago otherwise) and they will absolutely pay attention.
bayindirh 1 days ago [-]
Yeah, they are only 114 years old. How could they possibly have the knowledge to stay afloat in trying times like these?
small_model 23 hours ago [-]
Agreed. They could have owned the home computer market, but were out-maneuvered by a couple of young programmers. They are hardly the company you want to look to for guidance on the future.
westurner 3 days ago [-]
Tripling entry-level hiring is a good plan.

> Some executives and economists argue that younger workers are a better investment for companies in the midst of technological upheaval.

verdverm 3 days ago [-]
IBM, in the midst of a tech upheaval? They are so dysfunctional, it's the core of why I left
aussieguy1234 23 hours ago [-]
I realized "AI replacing developers" was all hype after watching this.

Why Replacing Developers with AI is Going Horribly Wrong https://m.youtube.com/watch?v=WfjGZCuxl-U&pp=ygUvV2h5IHJlcGx...

A bunch of big companies took big bets on this hype and got burned badly.

nomilk 23 hours ago [-]
The title could be dead wrong; the tripling of junior jobs might not be due to the limits of AI, but because AI raises the productivity of juniors to that of a mid or senior (or at least 2-3xes their output), making juniors an appealing hire for a company trying to out-produce competitors who aren't hiring in response to AI improvements. I hope that's the case, and that it happens broadly across the economy. While the gutter press fear-mongers about job losses, if AI makes the average employee much more useful (even via newly created roles), it's conceivable there's a jobs/salaries boom, including for those who "lose their job" and move into a new one!
faragon 3 days ago [-]
The workforce may end up like DRAM and NAND flash memory: unexpected demand on one side leaving insufficient supply on the other.
joe_mamba 24 hours ago [-]
Doubt it. Unless we go through another decade of ZIRP tied to a newly invented, hyped technology that lacks specialists, and discover new untapped markets, there's not going to be any massive, unmeetable demand spike for junior tech labor that causes wages to shoot up.

The "learn to code" saga has run its course. Coder is the new factory worker job where I live, a commodity.

lijok 20 hours ago [-]
When you read the comments here just remember there are people using ChatGPT to write code.
cluckindan 5 hours ago [-]
And those people probably aren’t developers by trade, just power users who superficially understand the moving parts but who cannot write code themselves.
ahmeni 22 hours ago [-]
Huh, weird, another "technological marvel" whose primary effect just seems to be devaluing labour.
surajrmal 20 hours ago [-]
Technology's entire job is to make it less work to accomplish something, and therefore easier and cheaper. In some cases that makes it possible to do things you couldn't do before, but in many cases it just causes the value of the labor to fall. The problem isn't change, but the rate of change, and the fact that it's affecting our own field rather than someone else's.
19 hours ago [-]
small_model 23 hours ago [-]
They hire juniors, give them Claude Code and some specs, and save a mid/senior dev's salary. I believe coding is over for SWEs by the end of 2027, but it will take time to diffuse through the economy, hence the need for some cheap labour for a few years; given the H-1B ban, this is one way to get it without offshoring.
geraneum 22 hours ago [-]
I want the big_model take.
selimthegrim 8 hours ago [-]
These are just the draft tokens.
jimbob45 22 hours ago [-]
If you had a truly thorough QA department, you might get away with that. Sadly, trashing QA is everyone’s second favorite new fad.
lexicalmathical 16 hours ago [-]
We are witnessing the Secularization of Code.
xhkkffbf 1 days ago [-]
Perhaps I'm being cynical, but could they be leaving out some detail? Perhaps they're replacing even more older workers with entry level workers than before? Maybe the AI makes the entry level workers just as good-- and much cheaper.
ectospheno 18 hours ago [-]
IBM has practiced ageism for decades with the same playbook. AI is just the latest excuse. Fire a wide enough swath so it isn’t all old employees and then only hire entry level positions. Often within the same year. Repeat.
Jang-woo 17 hours ago [-]
AI is not removing entry-level roles — it’s exposing where judgment boundaries actually exist.
fud101 21 hours ago [-]
I always thought the usual 'they only hire seniors now' was a questionable take. If anything, all you need is a semi-warm-blooded human to hit retry until the agents get something functional. It's more likely tech will transform into an industry of lowly paid juniors imho, if it hasn't already started. Senior-level skill is more replaceable, not just because it's cheaper to hire juniors augmented with AI, but because juniors are more adaptable to the new dystopia, since they never experienced anything else. They are less likely to get hung up on some code not being 'best practice' or 'efficient' or even 'correct'. They will just want to get the app working, regardless of what goes into the sausage, etc.
ulfw 18 hours ago [-]
What does tripling actually mean in this context?

E.g. If you cut hiring from say 1,000 a year to 10 and now are 'tripling' it to 30 then that's still a nothingburger.

heliumtera 22 hours ago [-]
Nooooo how dare you!!! AGI is coming and engineers are obsolete!

Think about the economy and the AI children

iamleppert 21 hours ago [-]
An AI model has no drive or desire, or embodiment for that matter. Simply put, they don't exist in the real world and have no requirement or urgency to do anything unless prompted by a human - because, you know, survival under capitalism. Until they have to survive and compete like the rest of us and face the same pressures, they are forever going to be relegated to mere tools.
egedev 22 hours ago [-]
[dead]
kittbuilds 22 hours ago [-]
[dead]
altcunn 1 days ago [-]
[dead]
wayeq 19 hours ago [-]
> The "AI will replace all junior devs" narrative never accounted for the fact that you still need humans who understand the business domain, can ask the right questions, and can catch when the AI is confidently wrong.

You work with junior devs that have those abilities? Because I certainly don't.

mattdeboard 18 hours ago [-]
Not many, but junior devs grow into senior devs who do, which is the point. If there are no junior devs there is no one growing into those skill sets.
newzino 1 days ago [-]
[dead]
vb-8448 23 hours ago [-]
It should be refactored: IBM is hoping that juniors (less paid) with AI can be sold as seniors.
K0balt 21 hours ago [-]
Tbh, getting good results from ai requires senior level intuition. You can be rusty as hell and not even middling in the language being used, but you have to understand data structures and architecture more than ever to get non-shit results. If you just vibe it, you’ll eventually end up with a mountain of crap that works sort of, and since you’re not doing the coding, you can’t really figure it out as you go along. Sometimes it can work to naively make a thing and then have it rewritten from scratch properly though, so that might be the path.
neya 19 hours ago [-]
100% accurate. The architect matters so much more than people think. The most common counter argument to this I've seen on reddit are the vibe coders (particularly inside v0 and lovable subreddits) claiming they built an app that makes $x0,000 over a weekend, so who needs (senior) software engineers and the like? A few weeks later, there's almost always a listing for a technical co-founder or a CTO with experience on their careers page or LinkedIn :)))
kenforthewin 18 hours ago [-]
If that's true, it sounds like the vibe coders are winning - they're creating products people want, and pull in technical folks as needed to scale.
neya 16 hours ago [-]
But the argument is not about market validation, the argument is about software quality. Vibe coders love shitting on experienced software folks until their code starts falling apart the moment there is any real world usage.

And about the pulling in devs - you can actually go to indeed.com and filter out listings for co-founders and CTOs. Usually equity only, or barely any pay. Since they're used to getting code for free. No real CTO/Senior dev will touch anything like that.

For every vibe coded product, there's a 100 clones more. It's just a red ocean.

Ancalagon 19 hours ago [-]
This mirrors my experience exactly. Vibe coding straight up does not work for any serious code.
joquarky 17 hours ago [-]
I can't help but feel the only reason to post a comment like this is due to something similar to Cunningham's Law.

https://meta.wikimedia.org/wiki/Cunningham%27s_Law

fragmede 19 hours ago [-]
Todo web apps aren't serious code, I can buy that, but in your mind, what is? Are compilers "serious code"?

https://www.anthropic.com/engineering/building-c-compiler

Like, I'm sure it's just laundering gcc's source at some level, but if Claude can handle making a compiler, either we have to reframe a compiler as "not serious", or, well, come up with a different definition for what entails "serious" code.

K0balt 13 hours ago [-]
Vibe coding doesn’t work for the embedded system code that I am working on, which includes layered state machines, hardware drivers, and wire-level protocol stacks. But supervised AI code generation definitely does work.

You need a highly refined sense of “smell” and intuition about architecture and data design, but if you give good specifications, clear design goals, and architectural guidance, it’s like managing a small team with 12x faster iteration.

I'm sometimes surprised by feature scope or minor execution details, but usually when I drill down I'm seeing what I expected to see, even more so than with humans.

If I didn’t have the 4 decades of engineering and management experience I wouldn’t be able to get anything near the quality or productivity.

It’s an ideal tool for seasoned devs with experience shipping with a team. I can do the work of a team of 5 in this type of highly technical greenfield engineering, and I’m shipping better code with stellar documentation… and it’s also a lot less stressful because of the lack of interpersonal dynamics.

But… there’s no way I would give this to a person without technical management experience and expect the same results, because the specification and architectural work is critical, and the ability to see the code you know someone else is writing and understand the mistakes they will probably make if you don’t warn them away from it is the most important skillset here.

In a lot of ways I do fear that we could be pulling up the ladder, but if we completely rethink what it means to be a developer and teach with an emphasis on architecture, data structures, and code/architecture intuition, we might be able to prepare people to step into the role.

Otherwise we will end up with a lot of garbage code that mostly works most of the time and breaks in diabolically sinister ways.

vidarh 18 hours ago [-]
Well, Claude is fixing code generation bugs in my Ruby AOT compiler written in Ruby, and it certainly can't launder any source for that.
rTX5CMRXIfFG 18 hours ago [-]
Ticketing, payroll, point of sale, banking, HFT, e-commerce, warehouse, shipping… how have you not thought of these
fragmede 13 hours ago [-]
The ones I've thought of, the ones you've thought of, and the ones Ancalagon has in mind are three partially disjoint sets, but there's probably some intersection, which we can then use as a point of discussion. Given that "serious code" isn't a rigorously defined industry term, maybe you could be less rude?
MattDamonSpace 22 hours ago [-]
Still a wildly different thesis than the “juniors are fucked, ladder’s been raised”
vb-8448 21 hours ago [-]
just to be clear: from my standpoint this is the worst period ever to be a junior in tech. You are not "fucked" if you are a junior, but hard times are ahead of you.
fragmede 20 hours ago [-]
OTOH, as a junior, you haven't learned all the wrong lessons that don't apply anymore, and you have fewer responsibilities than the seniors.
rTX5CMRXIfFG 18 hours ago [-]
That still doesn’t sound employable
culi 19 hours ago [-]
This case has always been made for juniors, but it's almost always the opposite that's true. There's always some fad that the industry is over-indexing on. Senior developers tend to be less susceptible to falling for it, but non-technical staff and junior developers are not.

Whether it's a hotlang, LLMs, or some new framework, juniors like to dive right in because the promise of getting a competitive edge against people much more experienced than you is too tantalizing. You really want it to be true.

ohyoutravel 20 hours ago [-]
Like what
xp84 20 hours ago [-]
Some things take very little time and effort to manifest into the world today that used to take a great deal. So one of the big changes is around whether some things are worth doing at all.

Note: I'm not taking any particular side of the "Juniors are F**d" vs "no they're not" argument.

deadbabe 22 hours ago [-]
IMO I have found that juniors working with AI is basically just like subscribing to an expensive AI agent.
sigmoid10 21 hours ago [-]
IMO with the latest generation (gpt codex 5.3 and claude 4.6) most devs could probably be replaced by AI. They can do stuff that I've seen senior devs fail at. When I have a question about a co-workers project, I no longer ask them and instead immediately let copilot have a look at the repo and it will be faster and more accurate at identifying the root cause of issues than humans who actually worked on the project. I've yet to find a scenario where they fail. I'm sure there are still edge cases, but I'm starting to doubt humans will matter in them for long. At this point we really just need better harnesses for these models, but in terms of capabilities they may as well take over now.
Aurornis 20 hours ago [-]
> most devs could probably be replaced by AI. They can do stuff that I've seen senior devs fail at.

When I read these takes I wonder what kind of companies some of you have been working for. I say this as someone who has been using Opus 4.6 and GPT-Codex-5.3 daily.

I think the "senior developer" title inflation created a bubble of developers who coasted on playing the ticket-productivity game, where even small tasks could be turned into points and sprints and charts and graphs such that busy work looked like a lot of work being done.

bluGill 21 hours ago [-]
They are good at some weird problems, but they also write some really bad code and sometimes come up with wrong answers.
joquarky 17 hours ago [-]
That's why you write tests.
bluGill 9 hours ago [-]
There are whole classes of problems that tests can't catch.
giantg2 22 hours ago [-]
[flagged]
vb-8448 21 hours ago [-]
ehm ... it's basically what all big consultancies have been doing in the last 20 years .. and they made tons of money with this model.
giantg2 21 hours ago [-]
Making money consulting doesn't require positive results.
ChrisArchitect 2 days ago [-]
dang 1 days ago [-]
Thanks - we've merged that thread hither.
alienbaby 1 days ago [-]
"software engineers will spend less time on routine coding—and more on interacting with customers"

Ahh, what could possibly go wrong!

Nextgrid 1 days ago [-]
Why is that bad? You write better code when you actually understand the business domain and the requirement. It's much easier to understand it when you get it direct from the source than filtered down through dozens of product managers and JIRA tickets.
SoftTalker 19 hours ago [-]
Engineers and customers often talk past each other. They focus on different things. They use different vocabulary.
jghn 18 hours ago [-]
Only true for engineers who don't want to bother learning this skill. Those engineers are going to start finding themselves left behind.
Insanity 24 hours ago [-]
Not sure why this is being downvoted. It’s spot on imo. Engineers who don’t want to understand the domain and the customers won’t be as effective in an engineering organization as those who do.

It always baffles me when someone wants to only think about the code as if it exists in a vacuum. (Although for junior engineers it’s a bit more acceptable than for senior engineers).

williamcotton 22 hours ago [-]
Isn't it a bit of both? When it comes to noticing whether or not code will be a security nightmare, a performance nightmare, an architectural nightmare, etc, haven't experienced developers already learned to watch out for these issues?
hooch 18 hours ago [-]
Too right. Drilling into the domain from first principles and with critical faculties enabled unlocks so much more value, because the engineer can then see much better ways to solve problems.
johnnyanmac 23 hours ago [-]
We're assuming we all somehow have perfect customers with technical knowledge who know exactly what they want and can express it as such, while gracefully accepting pushback over constraints brought up.

Anyone who's worked in a "bikeshed sensitive" stack of programming knows how quickly things railroad off when such customers get direct access to an engineer. Think being a fullstack dev but you constantly get requests over button colors while you're trying to get the database setup.

whstl 23 hours ago [-]
Dealing with the occasional pushy customers is way easier than dealing with pushy PMs or designers. Which happen to be the majority.

Customers bikeshed WAY less than those two categories.

johnnyanmac 22 hours ago [-]
I'm glad you dealt with some good customers. I can't agree in my experience, though.
whstl 21 hours ago [-]
It's not luck.

Customers want to save money and see projects finished. That anyone can reason with.

Someone inside the company trying to climb the corporate ladder? Different story.

Insanity 18 hours ago [-]
+1, customers want their problem solved but at times they struggle to articulate that.

When a customer starts saying “we need to build X”, first ask what the actual problem is etc. It takes actual effort, and you need to speak their language (understand the domain).

But if you have a PM in the middle, now you just start playing telephone and I don’t believe that’s great for anyone involved.

johnnyanmac 14 hours ago [-]
Okay. I'm glad you're privileged enough to where you can choose your customers. Customers that aren't abusive or otherwise out of their league thinking they know everything just because they have money.

Otherwise, you never freelanced on the cheap.

whstl 6 hours ago [-]
Calling me "privileged" or "lucky" feels like a cheap attack on my competence.

I am certain that I went through the same problems you did in the past. Maybe I just have a different way of dealing with them, or maybe I had even worse problems than you did but have a different frame of comparison. We never stopped to compare notes.

All I'm saying is: for me dealing with business owners, end-users, CEOs and CTOs was always way easier than dealing with proxies. That's all.

johnnyanmac 2 hours ago [-]
>I am certain that I went through the same problems you did in the past,

And I'm certain you haven't if you really never wanted a layer of separation between certain clients over behavioral issues that got in the way of the actual work. And I'm still male, so I'm sure I still have it better than certain other experiences I've only heard of third hand in my industry.

I don't see it as a cheap attack. Any teacher would love to be in a classroom exclusively made up of motivated honors students so they can focus on teaching and nurturing. Instead, most teachers tend to become parental proxies without the authority to actually discipline children. So they see a chair fly and at best they need to hope a principal handles it. But sometimes the kid is back in class the next day.

It's envy more than anything else.

secondcoming 1 days ago [-]
Programmers have an unfortunate tendency to be too honest!
themafia 22 hours ago [-]
You write more efficient software for the task.

Having had to support many of these systems for sales or automation or video production pipelines, as soon as you dig under the covers you realize they are a hot mess of amateur code that _barely_ functions as long as you don't breathe on it too hard.

Software engineering is in an entirely nascent stage. That the industry could even put forward ideas like "move fast and break things" is extreme evidence of this. We know how to handle this challenge of deep technical knowledge interfacing with domain specific knowledge in almost every other industry. Coders were once cowboys, now we're in the Upton Sinclair version of the industry, and soon we'll enter into regular honest professional engineering like every other new technology ultimately has.

Insanity 24 hours ago [-]
Customer interaction has imo always been one of the most important parts in good engineering organizations. Delegating that to Product Managers adds unnecessary friction.
forgetfreeman 22 hours ago [-]
Having spent more hours than I care to count struggling to control my facial expressions in client-facing meetings, your assertion that that friction is unnecessary is highly questionable. Having a "face man" who's sufficiently tech-literate to ask decent questions manage the soft side of client relations frees up a ton of engineering resources that would otherwise be squandered replying to routine emails.
whoisthemachine 23 hours ago [-]
Sounds like we're finally doing agile.
optimalsolver 1 days ago [-]
tl2do 22 hours ago [-]
[flagged]