Optimize for change not application performance (echooff.dev)
po1nt 1 day ago [-]
Author fails to acknowledge that there are many fields where we ship only once, and we should strive towards that if we want to avoid running firmware updates on our ultrasonic knives.

While we talk about maintainability, we all admire the Fast Inverse Square Root algorithm.

Optimize for what best serves your purpose. If you have high team fluctuation, optimize for readability. If you develop a spacecraft, optimize for safety. If you ship audio gear, optimize for latency.

lo1tuma 2 hours ago [-]
Very true. I think the author's work settles more in the webdev area, where you can make releases multiple times per day.

> Optimize for what best serves your purpose. If you have high team fluctuation, optimize for readability. If you develop a spacecraft, optimize for safety. If you ship audio gear, optimize for latency.

I fully agree with that.

account42 1 day ago [-]
> If you have high team fluctuation, optimize for readability.

Or better: If you have high team fluctuation, optimize that first so your team is actually effective.

po1nt 23 hours ago [-]
You can't fix faulty management as a developer. You can structure a code base around it.
AlotOfReading 21 hours ago [-]
A traditional engineer can't force purchasing to buy the right parts, but that doesn't mean they should make do by duct-taping sheet metal onto the reactor as a substitute. Technical workarounds are a poor solution to social problems.
0123456789ABCDE 22 hours ago [-]
but do we care, if management doesn't?
po1nt 20 hours ago [-]
I do take pride in my work. It keeps me from going insane and switching careers to take up leatherworking. Even if I have to fight an uphill battle against incompetence.
bunderbunder 21 hours ago [-]
I do wonder if sometimes these things are set up as false dilemmas, though.

I skimmed through NASA’s coding manual a while back, and one of the things that I took away from it was that optimizing for readability is optimizing for safety.

It’s just that it’s hard for me to see it as readability because I’m not familiar with the problem domain. For example, their ban on reentrancy would definitely require me to rewire my brain a bit. But, for what they are doing, that is a readability decision: they needed to be able to guarantee that a spacecraft’s firmware couldn’t experience a stack overflow, and reentrant code makes it much harder to reason about stack growth.

po1nt 21 hours ago [-]
Removing null pointer guards would improve readability, not safety. Removing password hashing would also improve readability and make the program easier to debug.
doctorpangloss 21 hours ago [-]
> we all admire the Fast Inverse Square Root algorithm.

i don't. that guy basically made the same game over and over again, while nearly everyone else was innovating in game design, reaching new audiences, etc. that's what change is about!

and then, he blows up the next thing he's put in charge of (VR), and blames everyone but himself. how many billions did he get and he couldn't figure it out? every bit of ethos from that guy was bad, it's not just the one little ethos of the hardcore little optimization algorithm, it's every ethos.

po1nt 21 hours ago [-]
Quake III was wildly successful. John Carmack (probably not the author of the algorithm anyway) open-sourced the game six years later, which is admirable. Regarding his VR achievements, he made VR viable and not just a military project. I actually like John Carmack and I see him as one of the OG garage programmers.

I personally think he was enormously successful in VR, just not in a market sense but in a technology and UX sense. In my opinion VR has yet to reach mass adoption, just not in the "second life" fantasy but in the "replacement for workspace monitors" kind of way.

KptMarchewa 22 hours ago [-]
> ultrasonic knives

Wow, TIL.

kikimora 22 hours ago [-]
If done right, optimizing for performance also achieves readability and maintainability. There is an edge case when you rewrite a loop with SIMD or use branchless programming. It is so rare, yet the focus of so many articles.

I do see a lot of systems that are both slow and hard to maintain because people focus on maintenance. They create abstractions upon abstractions in the name of maintainability, only to later find it does not work well with their hardware and infrastructure, prompting more complexity in the name of performance.

lo1tuma 2 hours ago [-]
I agree. Over-engineering is something many projects suffer from. Principles like YAGNI and KISS help, but it is hard to enforce them by tooling. So it remains a discipline of the engineers.
bunderbunder 21 hours ago [-]
I’ve never known towering abstractions to be good for maintainability, anyway. It sounds great on paper, but in practice it often ends up being extra mechanism to have to think your way through on your way to understanding a problem. Or they constrain the set of possible solutions you can undertake without major refactoring.

That isn’t to say abstractions are inherently harmful. But when I see codebases that really go nuts for it, it’s rarely the case that they were all carefully considered before implementation.

nijave 22 hours ago [-]
Nothing like waiting 20 minutes for a test suite that should have taken 2
lo1tuma 2 hours ago [-]
Yep, fast feedback cycles are important.
moebrowne 21 hours ago [-]
I agree with the overall sentiment of this post but I'm not sure about this:

> A codebase that is easy to maintain becomes easier to optimize.

I think optimization and maintainability are often at odds. The only example that comes to mind is loop unrolling: it increases performance but decreases maintainability.

lo1tuma 2 hours ago [-]
I assume the author means that when it is easy to change something in the code (i.e. without fear, because you have very good test automation), then it is also easy to apply changes like performance optimizations.

Things like loop unrolling are probably something I wouldn't do by hand in the source code; I would probably write a script that transforms the code automatically, so the original source code stays readable.

lo1tuma 4 days ago [-]
I mostly agree with the author that optimizing a code base for change should be the number 1 priority, but I think it is a different topic than, for example, application performance. And it is not an either-or: you can actually do both; the question, as always, is whether you should do it at all.

- Optimizing for change is basically the key principle of agility. Too often it is confused with being fast at delivery by default, just because you apply agile patterns. This is not true. You can be faster than e.g. with waterfall, but most of the time you will be slower. But that is not the point. The point is that you can adapt the plan very quickly. So instead of strictly following a 6-month plan, you can change plans on a daily basis and go in a completely different direction if business demands it.

- Application performance is actually not a "tech" thing, so I don't understand why so many developers pre-optimize for application performance without being asked to do so. Application performance is part of UX (user experience). There are studies out there showing that sometimes it is even beneficial to be slow and show a loading indicator, because it can increase trust from users: they think "Hey look... the application is calculating something to fulfill my needs", instead of seeing the answer instantly. In any case, application performance should be driven by business and user needs, not by engineers who feel personally obliged to do this. And furthermore, application performance should never be optimized blindly. Always benchmark the application and work on the bottleneck only.

account42 1 day ago [-]
> There are studies out there showing that sometimes it is even beneficial to be slow and show a loading indicator, because it can increase trust from users: they think "Hey look... the application is calculating something to fulfill my needs", instead of seeing the answer instantly.

Users being susceptible to dark patterns doesn't mean that dark patterns are something an engineer should see as acceptable.

> Always benchmark the application and work on the bottleneck only.

That's how you end up with software that's slow due to a million abstractions. Easily benchmarked bottlenecks can give you quick wins, but that doesn't mean you should stop there or not have any foresight to optimize things ahead of time where it makes sense. Your cost-benefit calculation also needs to take into account that optimization decisions (both architecture and lower-level implementation details) are much more costly to make after the code has already been written, which is why with today's YOLO software they often don't get done at all.

lo1tuma 2 hours ago [-]
> Users being susceptible to dark patterns doesn't mean that dark patterns are something an engineer should see as acceptable.

I think it is not even an engineer's job to define the UX/UI. I would rather consult an expert on that matter. But from my personal experience I can relate to that. When I see applications that seem too performant, I often get the feeling that they might be phishing pages because they don't do any actual work under the hood. So my first instinct is to avoid those kinds of pages, even though they might be legit.

> That's how you end up with software that's slow due to a million abstractions.

How is an abstraction related to benchmarking? Those are two completely distinct topics.

> Easily bench-marked bottlenecks can give you quick wins

They can... but most of the time it is not a quick win. The question is not whether it is a quick win, though. The only thing that matters is whether you are optimizing the bottleneck or something that has no measurable impact. Without benchmarking you are blind.

> but that doesn't mean you should stop there or not have any foresight to optimize things ahead of time where it makes sense

Yes, it does. When the bottleneck is gone and the performance becomes good enough, there is no further need to optimize. I'll happily quote Donald Knuth here: "Premature optimization is the root of all evil."

> Your cost benefit calculation also needs to take into account that optimizations decisions

This is true. But it doesn't only apply to optimizations; it applies to any kind of change you want to make to the code, so it also has to be considered when you want to build a new feature. The answer to that is: optimize for change. Which is basically the fundamental idea of working in an agile way, though most people don't do this correctly. Optimizing for change means you need a lot of test automation and clean code, so you can make any kind of code or architecture change quickly, with low cost, low risk, and without fear. I have been practicing this for over a decade now and it works pretty well.

201984 22 hours ago [-]
> There are studies out there showing that sometimes it is even beneficial to be slow and show a loading indicator, because it can increase trust from users,

And I as a user absolutely hate programs that do this. Put an "updated" message with a timestamp if you want, but don't pointlessly waste my time.

lo1tuma 3 hours ago [-]
Yes, there is probably no UX/UI approach that makes 100% of the users happy.
hotfrost 22 hours ago [-]
AI slop article with a few words highlighted in color or bold.
lo1tuma 3 hours ago [-]
Nope. Not AI, it's written by a human: https://github.com/screendriver/echooff.dev
locknitpicker 1 day ago [-]
This blog post reads like AI slop.

I doubt that the author even read the result, as its readability is subpar. In general, AI slop is more readable than this soup of bullet points.

This feels like eternal September, but powered by LLMs.

lo1tuma 3 hours ago [-]
Wait... can you make up your mind? Is it AI slop, or is it not AI slop because of the soup of bullet points, which doesn't look like AI?
joaohaas 22 hours ago [-]
Welcome to modern HN.
add-sub-mul-div 23 hours ago [-]
It's a new account that has only spammed the site with submissions from this one domain that no one else has ever submitted. This, along with it being slop, is becoming the default submission profile.
lo1tuma 3 hours ago [-]
Yes, I registered on HN exactly for the purpose of sharing and responding to one of the articles. For full disclosure, the blog author is a former coworker and I read his blog frequently. I want to respond to some articles, and I think HN discussions are the best platform for this.
thesuperevil 22 hours ago [-]
[flagged]
asn_tech_2019 22 hours ago [-]
[dead]