On one particular project from 1995 where the hardware was very cost optimised, the C program compiled to 1800 bytes, which meant we could save nearly a dollar by buying micro-controllers with 2KB flash rather than 4KB flash. We manufactured 20,000 units with this cheaper chip. Two years down the line we needed a simple code change to increase the UART baud rate to the host, a change that should have resulted in the same-sized binary but instead increased it to 2300 bytes due to a newer C compiler. We ended up tweaking the assembly file and running an assembler, then praying there would be no more changes!
I have always over-specified the micro-controllers a little from that point on, and kept a copy of the original dev environment. Luckily all my projects are now EOL, as I am retired.
Neywiny 1 days ago [-]
Could also just edit the old binary directly in a pinch?
billforsternz 22 hours ago [-]
One of my best rescue jobs involved doing this in 1999, yes that 1999. The client had shuttered their development department years before but was expecting to continue happily supporting and selling their simple enough alarm system products indefinitely. Testing revealed that come 2000 the alarms would just fire continually. Whoops. Fortunately there was one dev PC they'd decided to keep and not touch. Found the offending .c code and the corresponding offending machine code after some disassembly. A little bit of creative assembly language was required to squeeze an extra check in but really no big deal and the day was saved. I remember the client manager being ridiculously happy and grateful.
bartread 1 days ago [-]
Whilst I disapprove of your use of the word "just", which I am strongly of the opinion should be banned in engineering circles...
I have done something similar, albeit in a different context, to fix the behaviour of a poorly performing SQL query embedded in a binary for which the source code was not easily available (as in: it turned out that the version in source control wasn't the version running in production and it would have been quite a lot of work to reverse engineer the production version and retrofit its changes back to the source - and, yes, this is as bad as you think it is).
When I initially suggested monkey-patching the binary there was all manner of screaming and objections from my colleagues, but they were eventually forced to concede that it was the pragmatic and sensible thing to do.
sebazzz 17 hours ago [-]
> it turned out that the version in source control wasn't the version running in production and it would have been quite a lot of work to reverse engineer the production version
When I started at my work, a previous software dev with practices more like a mechanic than a software dev didn't use tags and all binaries deployed to production were always the default version 1.0.0.0 of the C# project templates in Visual Studio. To make matters worse, variants of the software were just copy pasted in CVS with their core code checked in as binaries and not their original C# projects. Fun times finding out what actually ran on production, and patching anything in it!
travoc 1 days ago [-]
"luckily all my projects are now EOL as I am retired."
I doubt that everything you ever worked on is end-of-life. Some of it is still out there...
boznz 1 days ago [-]
Correct, I have thousands of tank temperature controllers still out there, still working fine where the End Of Life was 3 years ago. EOL just means support for spares and software updates cannot be guaranteed past that point, and is mainly tied to the EOL of the specific micro-controller used.
Rygian 4 hours ago [-]
It's end-of-life.
If it is still running out there, it's running in a zombie state.
7thpower 1 days ago [-]
Better have kept those environments.
direwolf20 1 days ago [-]
Visual C++ 6 was the first C(++) compiler I used. I'm fairly certain it had auto completion (Intellisense).
Casey Muratori would point out the debugger ran faster on hardware from the era than modern versions run on today's hardware, though I don't have a link to the side-by-side video comparison.
Edit: Casey Muratori showing off the speed of visual studio 6 on a Pentium something after ranting about it: Jump to 36:08 in https://youtu.be/GC-0tCy4P1U — earlier section of the video is how it is today (or when the video was made)
ack_complete 22 hours ago [-]
The VS debugger got an order of magnitude slower in the transition from VS6 to Visual Studio .NET. It's been sped up a bit but is still nowhere near as fast as the VS6 debugger at responding to step commands, debug output, or conditional breakpoints. In VS.NET you could be waiting as long as a full second on a contemporary dev machine for the debugger to finish stepping forward one line.
Funny thing is that at the time, I was lamenting how much slower VC6 was than VC4. Macro playback, for instance, got much slower in VC6. It's all relative.
reactordev 23 hours ago [-]
absolutely! When launching my editor today, it phones home, it checks location, it does everything that an editor SHOULDN'T DO. Not to mention the extensions...
Software today is a horrible bloated mess on top of horrible bloated messes.
Borg3 9 hours ago [-]
I think you were using some third-party software for auto completion. There was a project called Visual Assist, which was a pretty popular and powerful tool.
ethin 1 days ago [-]
It's really ironic that this appeared on the front page when it did, because I've spent the last couple days replacing the ZQuake sound system with FMOD and Atmoky TrueSpatial for HRTF and such. This was my first time ever working on a code base from 1996-2000. And in pure C no less. C feels so foreign to me since I'm so used to writing in C++ and Zig and such. But it was still really fun!
And I mean it doesn't seem super impressive, but it's something. Lol
reactordev 23 hours ago [-]
you could have chosen a far worse codebase than Quake from the 90s. Quake was pretty clean in comparison. Sensible use of macros for doing things. A type system that made sense.
Descent on the other hand...
ethin 23 hours ago [-]
I mean... It's ugly from a modern standpoint. Zero encapsulation to speak of. Global state everywhere. But doing the modifications I did was pretty easy, all things considered. So yeah, the code is clean, if "ugly" when viewed from a modern standpoint. It was definitely fun to do either way! Idk what other changes I'll make, we'll see. Especially since I don't know the architecture very well yet.
reactordev 22 hours ago [-]
Of course! Global state IS game state!
It definitely was an amazing codebase for the time. You didn’t need to get hung up on architecture because it is very singular… it’s just a level, you, and the entities that were created when the level loaded.
There’s no pre-caching, no virtual textures, no shaders (materials came later, with Quake 3), it’s just pure load -> set -> loop. The “client” renders, the “server” has the state.
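Just to illustrate the shape being described (a sketch only, not Quake's actual code; every name here is made up): the level and its entities get created once up front, then a single loop lets the "server" side advance the state while the "client" side merely draws it.

    /* Not Quake source -- just the "load -> set -> loop" shape described above.
       The "server" side owns the game state; the "client" side only renders it. */
    #include <stdbool.h>

    typedef struct {
        double time;                 /* plus entities, world model, etc. */
    } game_state_t;

    static game_state_t state;

    static void load_level(const char *name)              { (void)name; /* parse map, spawn entities */ }
    static void server_frame(game_state_t *s, double dt)  { s->time += dt; /* physics, AI, game logic */ }
    static void client_render(const game_state_t *s)      { (void)s; /* draw the current state */ }
    static bool still_running(void)                       { return true; }

    int main(void) {
        load_level("e1m1");              /* load: level and entities created once     */
        while (still_running()) {        /* loop: no precaching, no streaming, frames */
            server_frame(&state, 1.0 / 60.0);
            client_render(&state);
        }
    }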
ethin 8 hours ago [-]
Honestly I've been wondering if I could do this with Ezquake or similar (and how hard it would be). Problem is that Ezquake has VoIP too and a bunch of other things (and is for whatever reason stuck on SDL2 (?)), so that might be quite annoying to do. And I don't know if they'd accept my contributions. Worth a try though I suppose?
janosch9001 16 hours ago [-]
I tried to capture that specific VS 6.0 vibe with a text editor color theme I made, Studio 98 (for Visual Studio Code and Vim/Neovim). I actually sampled the hex values from Fabien Sanglard’s Quake/NT 4.0 blog screenshots to update it.
Link: https://github.com/jnz/studio98
Can't fix the Electron sluggishness compared to VS6, but at least the syntax highlighting feels a bit like home.
There's also an easy fix: https://github.com/krystalgamer/spidey-decomp/blob/ad49c0f5f...
Almost certainly - every one of his other books has been telegraphed by articles about the work he’s doing to get the original setup built and running.
nyarlathotep_ 22 hours ago [-]
This was my (hopeful) first thought on seeing this; his recent posts have been Quake-related. I do hope this is a harbinger of another installment. His others have been excellent.
torh 1 days ago [-]
I hope so. The other books have been great fun to read, with the detour of CP-SYSTEM as a nice surprise.
dajt 16 hours ago [-]
I used Visual C in the early 90s and it was a dream compared to vi and whatever C compiler the various unices I was using had.
pjmlp 15 hours ago [-]
That was already the case when comparing the Borland compilers for MS-DOS, and Windows 3.x.
Hence why I eventually found refuge in XEmacs, and DDD, until IDEs like KDevelop and Sun Forte came to be.
bluedino 1 days ago [-]
I'd like to see someone build the Linux source code leak that came out not too far after Quake was released.
yjftsjthsd-h 1 days ago [-]
What do you mean, "leak"? Linux would have been developed in the open?
> (Visual Studio 6) I never used it but it must have felt like a dream at the time.
I used it in the mid-90's and yes, it was eye opening. On the other hand, I was an Emacs user in uni, and by studying a bit the history of Emacs (especially Lucid Emacs) I came to understand that the concepts in Visual Studio were nothing new.
On the third hand, I hated customizing Emacs, which did not have "batteries included" for things like "jump to definition", not to mention a package manager. So the only times in the late-90s I got all the power of modern IDEs was when I was doing something that needed Windows and Visual Studio.
knorker 1 days ago [-]
> The first batches of Quake executables, quake.exe and vquake.exe were programmed on HP 712-60 running NeXT and cross-compiled with DJGPP running on a DEC Alpha server 2100A.
Is that accurate? I thought DJGPP only ran on and for PC-compatible x86. id had an Alpha for things like running qbsp and light and vis (these took forever to run, so the Alpha SMP was really useful), but for building the actual DOS binaries, surely this was DJGPP on an x86 PC?
Was DJGPP able to run on Alpha for cross compilation? I'm skeptical, but I could be wrong.
Edit: Actually it looks like you could. But did they? https://www.delorie.com/djgpp/v2faq/faq22_9.html
There is also an interview with Dave Taylor explicitly mentioning compiling Quake on the Alpha in 20s (source: https://www.gamers.org/dhs/usavisit/dallas.html#:~:text=comp... ). I don't think he meant running qbsp or vis or light.
This is when they (or at least Carmack) were doing development on NeXT? So were those the DOS builds?
qingcharles 1 days ago [-]
I thought the same thing. There wouldn't be a huge advantage to cross-compiling in this instance since the target platform can happily run the compiler?
frumplestlatz 1 days ago [-]
Running your builds on a much larger, higher performance server — using a real, decent, stable multi-user OS with proper networking — is a huge advantage.
knorker 1 days ago [-]
Yes, but the gains may be lost in the logistics of shipping the build binary back to the PC for actual execution.
An incremental build of C (not C++) code is pretty fast, and was pretty fast back then too.
The q1source.zip this article links to is only 198k lines spread across 384 files. The largest file is 3391 lines. Though the linked q1source.zip is QW and WinQuake, so not exactly the DJGPP build. (Quoting the README: "The original dos version of Quake should also be buildable from these sources, but we didn't bother trying").
It's just not that big a codebase, even by 1990s standards. It was written by just a small team of amazing coders.
I mean correct me if you have actual data to prove me wrong, but my memory at the time is that build times were really not a problem. C is just really fast to build. Even back in, was it 1997, when the source code was found lying around on an ftp server or something: https://www.wired.com/1997/01/hackers-hack-crack-steal-quake...
pdw 22 hours ago [-]
"Shipping" wouldn't be a problem, they could just run it from a network drive. Their PCs were networked, they needed to test deathmatches after all ;)
And the compilation speed difference wouldn't be small. The HP workstations they were using were "entry level" systems with (at max spec) a 100MHz CPU. Their Alpha server had four CPUs running at probably 275MHz. I know which system I would choose for compiles.
knorker 14 hours ago [-]
> "Shipping" wouldn't be a problem, they could just run it from a network drive.
This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow even on the PC) and the network overhead adds up. Especially back then.
> just run it from a network drive.
It still needs to be transferred to run.
> I know which system I would choose for compiles.
All else equal, perhaps. But were you actually a developer in the 90s?
Borg3 9 hours ago [-]
What's the problem? 1997? They were probably using a 10BASE-T network, and that's 10 Mbit...
Using Novell NetWare would allow you to transfer data at about 1 MB/s. quake.exe is < 0.5 MB, so the transfer would take around 1 sec.
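As a rough back-of-the-envelope check (the ~1 MB/s usable figure above is an assumption about protocol overhead, not a measurement):

    \[
      10\ \text{Mbit/s} \approx 1.25\ \text{MB/s (raw)}, \qquad
      t \approx \frac{0.5\ \text{MB}}{1\ \text{MB/s (usable)}} \approx 0.5\ \text{s}
    \]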
knorker 4 hours ago [-]
Not sure what you mean by "problem". I said minuscule cancels out minuscule.
frumplestlatz 2 hours ago [-]
Networking in that era was not a problem. I also don’t know why you’re so steadfast in claiming that builds on local PCs were anything but painfully slow.
It’s also not just a question of local builds for development — people wanted centralized build servers to produce canonical regular builds. Given the choice between a PC and large Sun, DEC, or SGI hardware, the only rational choice was the big iron.
To think that local builds were fast, and that networking was a problem, leads me to question either your memory, whether you were there, or if you simply had an extremely non-representative developer experience in the 90s.
frumplestlatz 22 hours ago [-]
> I mean correct me if you have actual data to prove me wrong, but my memory at the time is that build times were really not a problem.
I never had cause to build quake, but my Linux kernel builds took something like 3-4 hours on an i486. It was a bit better on the dual socket pentium I had at work, but it was still painfully slow.
I specifically remember setting up gcc cross toolchains to build Linux binaries on our big iron ultrasparc machines because the performance difference was so huge — more CPUs, much faster disks, and lots more RAM.
That gap disappeared pretty quickly as we headed into the 2000s, but in 1997 it was still very large.
RupertSalt 9 hours ago [-]
I remember two huge speedups back in the day: `gcc -pipe` and `make -j`.
`gcc -pipe` worked best when you had gobs of RAM. Disk I/O was so slow, especially compared to DRAM, that the ability to bypass all those temp file steps was a god-send. So you'd always opt for the pipeline if you could fill memory.
`make -j` was the easiest parallel processing hack ever. As long as you had multiple CPUs or cores, `make -j` would fill them up and keep them all busy as much as possible. Now, you could place artificial limits such as `-j4` or `-j8` if you wanted to hold back some resources or keep interactivity. But the parallelism was another god-send when you had a big compile job.
It was often a standard but informal benchmark to see how fast your system could rebuild a Linux kernel, or a distro of XFree86.
knorker 14 hours ago [-]
> Linux kernel builds took something like 3-4 hours on an i486
From cold, or from modified config.h, sure. But also keep in mind that the Pentium came out in 1993.
ece 23 hours ago [-]
VC++ 6 was made by the same company that made VS Code.
dajt 16 hours ago [-]
But not the same people or culture.
pjmlp 15 hours ago [-]
Exactly, VSCode is done by well-known people from the GoF book and the VisualAge and Eclipse IDEs.
ErroneousBosh 1 days ago [-]
Funny, I've just been (re-)playing Quake 2 recently.
bombcar 1 days ago [-]
Action Quake II is still the best I’ve ever been at FPS.
wink 4 hours ago [-]
Same here. Clan wars on the ESL, visiting people for LAN parties, all around a good time.
jlundberg 1 days ago [-]
Yes, such a good game! :)
Gonna warm that up when the kids get a bit older and we start doing LAN parties.
That and Quake World Team Fortress.
jasonb05 1 days ago [-]
Nod. AQ2 was so damn fun!!
ethin 1 days ago [-]
I've only played Quake I (and a modified version of it at that which had accessibility features). I did purchase quake II and III from Steam a few years ago, but it's much harder to play them because they have no accessibility to speak of (and I'm not entirely certain where to begin to try to replicate what was done with my version of Quake I). Quake in general has always been an insanely fun game for me, and I started playing it in like 2010. I still love playing it even now because it's got something to it that most other games I have just lack. Don't ask me to explain what it is because I can't really put it into words but...
hypercube33 22 hours ago [-]
qpong, generations mod, catch the chicken, red rover, weapons of mass destruction, 4 way ctf, freeze tag....so many good mods
ErroneousBosh 1 days ago [-]
I'd kind of forgotten about AQ2. I wonder if I can get that going.
I bet there are still servers out there, at that.
gatane 23 hours ago [-]
Last time I played, the servers were empty. Maybe you could have better luck by finding their discord server...
https://q2online.net/action
https://store.steampowered.com/app/1978800/AQtion/
There was another article where someone bootstrapped the very first version of gcc that had the i386 backend added to it, and it turns out there was a bug in the codegen. I'll try to find it...
EDIT: Found it; in fact there was an HN discussion about an article referencing the original article:
https://miyuki.github.io/2017/10/04/gcc-archaeology-1.html
https://news.ycombinator.com/item?id=39901290
If you're into that kind of thing, may I strongly suggest https://virtuallyfun.com/ ? It's an absolute gold mine of amazing stuff along these lines
clarity_hacker 1 days ago [-]
[flagged]
kelnos 1 days ago [-]
> The detail about needing to reinstall Windows NT just to add a second CPU shows how tightly coupled OS and hardware were — there was no abstraction layer pretending otherwise.
In this case there was: the reason you needed to reinstall to go from uniprocessor to SMP was that NT shipped with two HALs (Hardware Abstraction Layers): one supporting just a single processor, and one supporting more than one.
The SMP one had all the code for things like CPU synchronization and interrupt routing, while the UP one did not.
If they'd packed everything into one HAL, single-processor systems would have to take the performance hit of all the synchronization code even though it wasn't necessary. Memory usage would be higher too. I expect that you probably could run the SMP HAL on a UP system (unless Microsoft put extra code in to make it not let you), but you wouldn't really want to do that, as it would be slower and require more RAM.
So it wasn't that those abstraction layers didn't exist back then. It was that abstraction layers can be expensive. This is still true today, of course, but we have the cycles and memory to spare, more or less, which was very much not the case then.
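Roughly, the tradeoff being described looks like this in code. A minimal C sketch, not NT's actual HAL; the CONFIG_SMP switch and the spinlock names are hypothetical. The SMP build pays for real atomic operations, while the uniprocessor build compiles the very same calls down to nothing:

    #include <stdatomic.h>

    /* Hypothetical build-time switch: define CONFIG_SMP for the multiprocessor build. */
    #ifdef CONFIG_SMP

    typedef struct { atomic_flag locked; } spinlock_t;

    static inline void spin_lock(spinlock_t *l) {
        /* Real inter-processor synchronization: spin until we own the flag. */
        while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
            ;  /* busy-wait */
    }

    static inline void spin_unlock(spinlock_t *l) {
        atomic_flag_clear_explicit(&l->locked, memory_order_release);
    }

    #else  /* uniprocessor build */

    typedef struct { char unused; } spinlock_t;

    /* With one CPU (interrupt masking aside) there is no other processor to
       synchronize with, so the lock calls compile away: no atomic instructions,
       no bus locking, no extra memory for lock bookkeeping. */
    static inline void spin_lock(spinlock_t *l)   { (void)l; }
    static inline void spin_unlock(spinlock_t *l) { (void)l; }

    #endif

Shipping a single image that carries both paths means either always paying the SMP cost or patching the code at runtime, which is the route Linux eventually took (as noted in the reply below).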
Sesse__ 1 days ago [-]
> If they'd packed everything into one HAL, single-processor systems would have to take the performance hit of all the synchronization code even though it wasn't necessary. Memory usage would be higher too.
Linux also used to be like this, but these days has unified MP/UP kernels; on single-CPU systems (or if you give nosmp), the extra code is patched away at boot time. It wouldn't have been an unheard of technique at the time.
kccqzy 1 days ago [-]
I actually would love this to be built into a language/compiler. A lot of the time when I’m building a single-threaded program, I’m using libraries written by other people. These libraries don’t know whether they are being incorporated into a single-threaded program or not. So they either take the performance penalty of assuming multi-threading (the approach of std::shared_ptr) or they give callers the choice by providing two implementations (Rust's Arc and Rc). But the latter doesn’t actually work, because this needs to be a global setting, not just a decision made at a local call site. It won’t work if such a library is a transitive dependency.
do_not_redeem 1 days ago [-]
Zig supports this. If you compile with -fsingle-threaded, operations on mutexes turn into nops, atomics become simple loads/stores, etc.
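A rough C sketch of the same idea (an illustration only, not Zig's or any particular library's implementation; the SINGLE_THREADED macro is hypothetical): one whole-program build flag decides whether reference counting uses atomics or plain integer arithmetic.

    #include <stdatomic.h>
    #include <stdlib.h>

    /* Hypothetical whole-program switch, analogous to a -fsingle-threaded mode. */
    #ifdef SINGLE_THREADED

    typedef long refcount_t;
    static inline void ref_inc(refcount_t *r) { ++*r; }         /* plain increment */
    static inline long ref_dec(refcount_t *r) { return --*r; }  /* plain decrement */

    #else

    typedef atomic_long refcount_t;
    static inline void ref_inc(refcount_t *r) {
        atomic_fetch_add_explicit(r, 1, memory_order_relaxed);  /* atomic RMW */
    }
    static inline long ref_dec(refcount_t *r) {
        /* acq_rel so the final owner observes all writes made before the last release */
        return atomic_fetch_sub_explicit(r, 1, memory_order_acq_rel) - 1;
    }

    #endif

    typedef struct { refcount_t refs; /* ...payload... */ } object_t;

    static void object_release(object_t *o) {
        if (ref_dec(&o->refs) == 0)
            free(o);
    }

This only works as a global setting, including for transitive dependencies, which is the point above: it fits a compiler/build flag far better than a per-call-site choice.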
mananaysiempre 23 hours ago [-]
Glibc has a bunch of tests throughout the codebase where it checks if there have been any threads started besides the main one. I don’t really know how effective they are from a performance perspective. (In principle, turning fgetc into getc_unlocked, for instance, could be quite beneficial.) Microsoft used to have a single-threaded C runtime, but it was done away with some time ago, I’m guessing because they started putting things into the platform that would start and manage random threads outside the programmer’s control.
vintermann 1 days ago [-]
Linux also used to have separate SMP kernels back when multi processor systems were rare.
amluto 1 days ago [-]
I’m pretty sure that the SMP kernel would boot on UP and vice versa, though.
bombcar 20 hours ago [-]
Yes but a SMP kernel on a UP system would be slightly but noticeably slower. And a UP kernel on an SMP system wouldn’t use the second processor - and rarely, wouldn’t boot.
But even in the era of LILO you could switch kernels pretty easily.
amluto 1 days ago [-]
They could have shipped both HALs. Or made it easy to switch which one was in use without reinstalling.
CDs were around and hard drives weren’t that small at the time. (Or maybe the really early SMP versions predated widespread availability of CD-ROMs, but I remember dealing with this nonsense and reinstalling from an MSDN CD set.)
flomo 1 days ago [-]
With NT4, I'm pretty sure both HALs were on the CD-ROM (unless you had an exotic system with a custom HAL, which came with its own install media). Keep in mind your use case is approximately nobody, you either had a SMP system or you didn't.
amluto 1 days ago [-]
It was really not that rare to want to move a disk from one system to another. Except that there was an obnoxiously high chance that Windows would refuse to boot.
flomo 16 hours ago [-]
Yeah, remember the primary disk controller was set in the registry. (And on an SMP system probably some specific SCSI.) You could fix that, but easier to reinstall. Of all the things to bitch about, this one seems strained.
bombcar 20 hours ago [-]
That desire was much stronger on the consumer side which wasn’t really into NT 4 - though I do recall running W2K instead of XP for a few years.
sincerely 1 days ago [-]
Man, I feel like this is the only type of comment I'm leaving these days, but is this account just posting AI generated comments?
bombcar 20 hours ago [-]
A HN account with a posting history is valuable, apparently.
webdevver 1 days ago [-]
there is something to be said about old windows installation CDs being essentially modern-day equivalents of immutable docker layers - i don't think one could say that about modern windows, but then i'm not super clued in into ms stuff.
dajt 15 hours ago [-]
With all the problems that recent Windows updates are causing, and a blog post about how the Windows team is using React Native to deliver changes to apps such as parts of Settings outside the usual updates, I got to thinking how great it was back in the Windows 3.11, 95, and XP days when you got Windows and it mostly worked, and it didn't get updated (aka more broken) every day. It was quick enough, it was yours, and it didn't tell you what to do.
You'd reinstall every year or two to clean out the disused DLLs etc, but it was mostly fine.
Of course it wasn't exposed to quite the same hostile environment it is today.