LT6502: A 6502-based homebrew laptop (github.com)
vardump 1 days ago [-]
I sometimes wonder what the alternate reality where semiconductor advances ended in the eighties would look like.

We might have had to manage with just a few MB of RAM and efficient ARM cores running at maybe 30 MHz or so. Would we still get web browsers? How about the rest of the digital transformation?

One thing I do know for sure. LLMs would have been impossible.

ksherlock 7 hours ago [-]
There is an alternate reality where semiconductor advances ended in the eighties. It's called 1990.

Anyhow, the WWW was invented in 1989/1990 on a 25 MHz 68040 NeXTcube. Strictly speaking, the 68040 and the NeXTcube weren't released until 1990 (and the NeXT was an expensive machine), but they were in development in 1989, so that's not a stretch. Besides, the WWW isn't really much more than HyperCard (1987) with networking.

cosmic_cheese 1 days ago [-]
For me the interesting alternate reality is the one where CPUs got stuck in the 200-400 MHz range, but somehow continued to become more efficient.

It's kind of the ideal combination in some ways. It's fast enough to competently run a nice desktop GUI, but not so fast that you can get overly fancy with it. Eventually you'd end up with OSes that look like highly refined versions of System 7.6/Mac OS 8 or Windows 2000, which sounds lovely.

antidamage 1 days ago [-]
I loved System 7 for its simplicity yet all of the potential it had for individual developers.

HyperCard was absolutely dope as an entry-level programming environment.

cosmic_cheese 1 days ago [-]
The Classic Mac OS model in general I think is the best that has been or ever will be in terms of sheer practical user power/control/customization, thanks to its extension- and control-panel-based architecture. Sure, it was a security nightmare, but there was practically nothing that couldn't be achieved by installing some combination of third-party extensions.

Even modern desktop Linux pales in comparison, because although it's technically possible to change anything imaginable about it, to do a lot of the things that extensions did you're looking at, at minimum, writing your own DE/compositor/etc., and at worst needing to tweak a whole stack of layers or wade through kernel code. Not really general user accessible.

Because extensions were capable of changing anything imaginable, often did so with tiny niche tweaks, and all targeted the same system, any moderately technically capable person could stack extensions (or, conversely, disable system-provided ones that implemented a lot of stock functionality) and have a hyper-personalized system without ever writing a line of code or opening a terminal. It was beautiful, even if it was unstable.

Someone 12 hours ago [-]
> The Classic Mac OS model in general I think is the best that has been or ever will be in terms of sheer practical user power/control/customization

A point for discussion is whether image-based systems are the same kind of thing as OSes where system and applications are separate things, but if we include them, Smalltalk-80 is better in that regard. It doesn’t require you to reboot to install a new version of your patch (if you’re very careful, that’s sometimes possible in classic Mac OS, too, but it definitely is harder) and is/has an IDE that fully supports it.

Lisp systems and Self also have better support for it, I think.

nxobject 23 hours ago [-]
I'm not too nostalgic for an OS that only had cooperative scheduling. I don't miss the days of Conflict Catcher, or having to order my extensions correctly. Illegal instruction? Program accessed a dangling pointer? A bomb message held up your whole computer and you had to restart (unless you had a non-stock debugger attached and can run ExitToShell, but no promises there).
cosmic_cheese 23 hours ago [-]
It had major flaws for sure, but also some excellent concepts that I wish could've found a way to survive through to the modern day. Modern operating systems may be stable and secure, but they're also far more complex, inflexible, generic, and inaccessible and don't empower users to anywhere near the extent they could.
Someone 12 hours ago [-]
> unless you had a non-stock debugger attached and can run ExitToShell

You could also directly jump into the ExitToShell code in ROM (G 49F6D8, IIRC). Later versions of Minibug had an "es" command that more or less did the same thing (that direct jump always jumps into the ROM code; "es" would, I think, jump to any patched versions).

rbanffy 13 hours ago [-]
> Not really general user accessible.

Writing a MacOS classic extension wasn’t exactly easy. Debugging one could be a nightmare.

I’m not sure how GTK themes are done now, but they used to be very easy to make.

cosmic_cheese 8 hours ago [-]
Right, but my point is that users didn’t have to write extensions because developers had already written one for just about any niche use one could think of.

And it wasn't just theming. Classic Mac OS extensions could do anything from adding support for new hardware, to overhauling the text rendering system entirely, to giving dragged desktop icons gravity and inertia, to adding a taskbar or a dock. The sky was the limit, and having a single common target for all of those things (vs. being split between the kernel and a thousand layers/daemons/DEs/etc.) meant that if it could be done, it probably had been.

rbanffy 12 minutes ago [-]
You'd need to touch many different parts of the OS to write those extensions. The difference is that, on classic MacOS, there wasn't much of a boundary between user space and kernel space.

I've done a couple of MITM toys with Windows 3.x, and the trick is always to expose the same interface as the thing you want to replace. Even if all you do is something like inverting mouse movements on odd minutes, you just pass everything else down to the original module.
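For what it's worth, a minimal sketch of that pass-through pattern (the names here are hypothetical, not the real Windows 3.x driver interface): the shim exposes the same signature as the thing it replaces, changes the one behaviour it cares about, and forwards everything else to the saved original.

    #include <ctime>

    // Hypothetical pass-through shim: same interface as the original handler,
    // one behaviour changed (invert mouse deltas on odd minutes), everything
    // else delegated to the saved pointer to the real module.
    using MouseMoveFn = void (*)(int dx, int dy);

    static MouseMoveFn original_mouse_move = nullptr;  // filled in when the shim is installed

    void shim_mouse_move(int dx, int dy) {
        std::time_t now = std::time(nullptr);
        if (std::localtime(&now)->tm_min % 2 != 0) {    // odd minute: the one tweak
            dx = -dx;
            dy = -dy;
        }
        original_mouse_move(dx, dy);                    // pass everything down unchanged
    }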

bee_rider 20 hours ago [-]
I sometimes drop my CPU down to the 400-800 MHz range. 400 is rough; 800, not so bad. It runs fine with something like i3 or sway.

If we really got stuck in the hundreds of MHz range, I guess we’d see many-core designs coming to consumers earlier. Could have been an interesting world.

Although, I think it would mostly be impossible. Or maybe we're in that universe already. If you are getting efficiency but not speed, you can always add parallelism. One form of parallelism is pipelining. We're at something like 20 pipeline stages nowadays, right? So in the ideal case, if we weren't able to parallelize in that dimension, we'd be at something like 6 GHz / 20 = 300 MHz. That's pretty hand-wavy, but maybe it is a fun framing.

cbm-vic-20 8 hours ago [-]
My alternate reality "one of these days" project is to have a RISC-V RV32E core on a small FPGA (or even emulated by a different SoC) that sits on a 40- or 64-pin DIP carrier board, ready to be plugged into a breadboard. You could create a Ben Eater-style small computer around this, with RAM, a UART, maybe something like the VERA board from the Commander X16...

It would probably need a decent memory controller, since it wouldn't be able to dedicate 32 pins to a data bus; loads and stores would need to be done either 8 or 16 bits at a time, depending on how many pins you want to use for that.
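A rough sketch of what that looks like from the core's side, assuming a hypothetical byte-wide external bus (bus_read8 stands in for whatever the FPGA's memory controller exposes): each 32-bit load becomes four bus cycles, trading pins for time.

    #include <cstdint>

    // Hypothetical byte-wide external bus: one read cycle returns 8 bits.
    uint8_t bus_read8(uint32_t addr);

    // A 32-bit load is reassembled little-endian from four byte reads;
    // a 16-bit-wide bus would halve this to two cycles.
    uint32_t load_word(uint32_t addr) {
        uint32_t word = 0;
        for (int i = 0; i < 4; ++i) {
            word |= static_cast<uint32_t>(bus_read8(addr + i)) << (8 * i);
        }
        return word;
    }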

erwan577 5 hours ago [-]
Have you thought about building a RISC-V “fantasy computer” core for the MiSTer FPGA platform? https://github.com/MiSTer-devel/Wiki_MiSTer/wiki

From a software-complexity standpoint, something like 64 MiB of RAM (possibly even 32 MiB for a single-tasking system) seems sufficient.

Projects such as PC/GEOS show that a full GUI OS written largely in assembly can live comfortably within just a few MiB: https://github.com/bluewaysw/pcgeos

At this point, re-targeting the stack to RISC-V is mostly an engineering effort rather than a research problem - small AI coding assistants could likely handle much of the porting work over a few months.

LeFantome 5 hours ago [-]
The really cool thing about RISC-V is that you can design your own core and get full access to a massive software ecosystem.

All you need is RV32I.

canpan 21 hours ago [-]
The Game Boy Advance could run 2D games (and some 3D demos) on 2 AA batteries for 16 hours. I wonder if we could get something more efficient with modern tech? It seems research made things faster but more power hungry, and we compensate with better batteries instead. I guess we can, and it's a design-goal problem; I also do love a backlit screen.
Aurornis 21 hours ago [-]
> It seems research made things faster but more power hungry

No, modern CPUs are far more power efficient for the same compute.

The primary power draw in a simple handheld console like that would be the screen and sound.

Putting an equivalent MCU on a modern process into that console would make the CPU power consumption so low as to be negligible.

nxobject 17 hours ago [-]
As a consumer product example: e-ink readers. (Of course, it also helps that the Game Boy had no radios, etc...)
rbanffy 13 hours ago [-]
E-ink uses energy when changing state. A 30 fps 3D game would require a lot of energy. Also, e-ink is electrophoretic, with pigment particles physically moving, so there would be a lot of wear as well.
strawhatguy 18 hours ago [-]
Yes; yet... I thought efficiency per unit of compute has more to do with the shrinking process node than anything else. That, and power use is divided across so many more instructions per second.
rahkiin 1 days ago [-]
Given enough power and space efficiency, you would start putting multiple CPUs together for specialized tasks. Distributed computing could have looked different.
rbanffy 12 hours ago [-]
This is what the Mac effectively does now - background tasks run on low-power cores, keeping the fast ones free for the interactive tasks. More specialised ARM processors have 3 or more tiers, and often have cores with different ISAs (32- and 64-bit ones). Current PC architectures are already very distributed - your GPU, NIC/DPU, and NVMe SSD all run their own OSes internally, and most of the time don't expose any programmability to the main OS. You could, for instance, offload filesystem logic or compression to the NVMe controller, freeing the main CPU from having to run it. The same could be done for a NIC - it could manage remote filesystem mounts and only expose a high-level file interface to the OS.

The downside would be we’d have to think about binary compatibility between different platforms from different vendors. Anyway, it’d be really interesting to see what we could do.

rbanffy 1 days ago [-]
This is more or less what we have now. Even a very pedestrian laptop has 8 cores. If 10 years ago you wanted to develop software for today’s laptop, you’d get a 32-gigabyte 8-core machine with a high-end GPU. And a very fast RAID system to get close to an NVMe drive.

Computers have been “fast enough” for a very long time now. I recently retired a Mac not because it was too slow but because the OS is no longer getting security patches. While their CPUs haven’t gotten twice as fast for single-threaded code every couple years, cores have become more numerous and extracting performance requires writing code that distributes functionality well across increasingly larger core pools.

LeFantome 5 hours ago [-]
Half my Linux machines are Macs “retired” for exactly this reason.
b112 1 days ago [-]
This was the Amiga. Custom coprocessors for sound, video, etc.
rbanffy 1 days ago [-]
Commodore 64 and Ataris had intelligent peripherals. Commodore’s drive knew about the filesystem and could stream the contents of a file to the computer without the computer ever becoming aware of where the files were on the disk. They also could copy data from one disk to another without the computer being involved.

Mainframes are also like that - while a PDP-11 would be interrupted every time a user at a terminal pressed a key, IBM systems offloaded that to the terminals, which kept one or more screens in memory and sent the data to another computer, a terminal controller, that would, then and only then, disturb the all-important mainframe with the mundane needs of its users.

kjs3 4 hours ago [-]
Ya...IBM and CDC both had/have architectures that heavily distributed tasks to subprocessors of various sorts. Pretty much dates to the invention of large-scale computers.

You also have things like the IBM Cell processor from PS3 days: a PowerPC 'main' processor with 7 "Synergistic Processing Elements" that could be offloaded to. The SPEs were kinda like the current idea of 'big/small processors' a la ARM, except SPEs are way dumber and much harder to program.

Of course, specialized math, cryptographic, and compression processors have been around forever. And you can even look at something like SCSI, where virtually all of the intelligence for working the drive was offloaded to the drive controller.

Lots of ways to implement this idea.

aa-jv 13 hours ago [-]
The alternative reality I wish we could move to, across the universe, is the one where SGI was the first to build a titanium laptop and became the world's #1 Unix laptop vendor...
rbanffy 13 hours ago [-]
I love the IRIX look, but they’d need to update it past the 1990s. It’d look very dated to current audiences.
aa-jv 12 hours ago [-]
NeXTSTEP looked pretty dated too, but it went through a nice evolution to bring it up to modern design standards... if SGI had made that laptop and increased their market share, I'm pretty sure IRIX would've gotten a face-lift.

Anyway, it's all about that alternative universe, where the success of the SGI tiBook has everyone running IRIX in their pockets...

rbanffy 12 hours ago [-]
When NeXT acquired Apple (for one Steve Jobs, getting $400 million as change) OPENSTEP was not dated - it still looked impressive next to MacOS 9 and Windows. And CDE, of course, but that’s a very low bar.
aa-jv 11 hours ago [-]
It didn't look as great as Irix did back then, though ..
rbanffy 10 hours ago [-]
No, but it had text anti-aliasing. That looked pretty neat, the one thing I wish SGI had done.

That, and porting their GUI to Linux.

voxelghost 21 hours ago [-]
Or if 640k was not only all you'd ever need, it was all we'd ever get.
LeFantome 5 hours ago [-]
Ya, but that means no high-res GUI. And pretty annoying limits on data set size.
kittbuilds 1 days ago [-]
There's something to this. The 200-400MHz era was roughly where hardware capability and software ambition were in balance — the OS did what you asked, no more.

What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free. An alternate timeline where CPUs got efficient but RAM stayed expensive would be fascinating — you'd probably see something like Plan 9's philosophy win out, with tiny focused processes communicating over clean interfaces instead of monolithic apps loading entire browser engines to show a chat window.

The irony is that embedded and mobile development partially lives in that world. The best iOS and Android apps feel exactly like your description — refined, responsive, deliberate. The constraint forces good design.

lelanthran 15 hours ago [-]
> What killed that balance wasn't raw speed, it was cheap RAM. Once you could throw gigabytes at a problem, the incentive to write tight code disappeared. Electron exists because memory is effectively free.

I dunno if it was cheap RAM or just developer convenience. In one of my recent comments on HN (https://news.ycombinator.com/item?id=46986999) I pointed out the performance difference on my 2001 desktop between an `ls` program written in Java at the time and the one that came with the distro.

Had processor speeds not increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1]. There was simply no way that companies would continue spending 6 - 8 times more on hardware for a specific workload.

C++ would have been the enterprise language solution (a new sort of hell!) and languages like Go (Native code with a GC) would have been created sooner.

In 1998-2005, computer speeds were increasing so fast there was no incentive to develop new languages. All you had to do was wait a few months for a program to run faster!

What we did was trade-off efficiency for developer velocity, and it was a good trade at the time. Since around 2010 performance increases have been dropping, and when faced with stagnant increases in hardware performance, new languages were created to address that (Rust, Zig, Go, Nim, etc).

-------------------------------

[1] It took two decades of constant work for those high-dev-velocity languages to reach some sort of acceptable performance. Some of them are still orders of magnitude slower.

cogman10 5 hours ago [-]
> Had processor speeds not increased at that time, Java would have been relegated to history, along with a lot of other languages that became mainstream and popular (Ruby, C#, Python)[1].

I'd go look at the start date for all these languages. Except for C#, which was a direct response to the Sun lawsuit, all these languages spawned in the early 90s.

Had processor speed and memory advanced more slowly, I don't think you'd see these languages go away; I think they'd just end up being used for different things or in different ways.

JavaOS, in particular, probably would have had more success. Seeing an entire OS written in and for a language with a garbage collector to make sure memory isn't wasted would have been much more appealing.

lelanthran 4 hours ago [-]
> I'd go look at the start date for all these languages. Except for C#, which was a direct response to the Sun lawsuit, all these languages spawned in the early 90s.

I don't understand your point here - I did not say those languages came only after 2000; I said they would have been relegated to history if they didn't become usable due to hardware increases.

Remember that Java was not designed as an enterprise/server language. Sun pivoted when it failed at its original task (set-top boxes). It was only able to pivot due to hardware performance increases.

cogman10 16 minutes ago [-]
> I said they would have been relegated to history if they didn't become usable due to hardware increases.

And I disagree with this assessment. These languages became popular before they were fast or the hardware support was mature. They may have taken different evolution routes, but they still found themselves useful.

Python, for example, entered a world where Perl was being used for one-off scripts in the shell. Python replacing Perl would have still happened, because its performance characteristics (and those of what Perl replaced, bash scripts) are similar. We may not have used Python or Ruby as web backends because they were too slow for that purpose. That, however, doesn't mean we wouldn't have used them for all sorts of other tasks, including data processing.

> Remember that Java was not designed as an enterprise/server language. Sun pivoted when it failed at its original task (set-top boxes). It was only able to pivot due to hardware performance increases.

Right, but the Java of old was extremely slow compared to today's Java. The JVM for Java 1 to 1.4 was dogshit. It wasn't hardware that made it fast.

Yet still, Java was pretty popular even without a fast JVM and JIT. HotSpot would still likely have happened, but maybe the GC would have evolved differently, as the current crop of GC algorithms trade memory for performance. In a constrained environment, Java might never have adopted moving collectors and instead relied on Go-like collection strategies.

Java applets were a thing in the 90s even though hardware was slow and memory constrained. That's because the JVM was simply a different beast in that era. One better suited to the hardware at the time.

Even today, Java runs on hardware that is roughly 80s quality (see Java Card). It's deployed on very limited hardware.

What you are mistaking is the modern JVM's performance characteristics for Java's requirements to run. The JVM evolved with the hardware and made tradeoffs appropriate for Java's usage and the hardware's capabilities.

I remember the early era of the internet. I ran Java applets in my Netscape and IE browsers on a computer with 32 MB of RAM and a 233 MHz processor. It was fine.

LeFantome 5 hours ago [-]
As you say, the trade-off is developer productivity vs resources.

If resources are limited, that changes the calculus. But it can still make sense to spend a lot on hardware instead of development.

nxobject 23 hours ago [-]
Lots of good practices! I remember how aggressively iPhone OS would kill your application when you got close to being out of physical memory, or how you had to quickly serialize state when the user switched apps (no background execution, after all!). And, for better or for worse, it was native code, because you couldn't and still can't get a "good enough" JITing language.
lucaspiller 19 hours ago [-]
I'm in the early phases of working on a game that explores that.

The backstory is that in the late 2050s, when AI has its hands in everything, humans lose trust in it. There are a few high-profile incidents - based on AI decisions - which cause public opinion to change, and an initiative is brought in to ensure important systems run hardware and software that can be trusted and human-reviewed.

A 16-bit CPU architecture - with no pipelining, speculative execution, etc. - is chosen, as it's powerful enough to run such systems, but also simple enough that a human can fully understand the hardware and software.

The goal is to make a near-future space exploration MMO. My MacBook Pro can simulate 3000 CPU cores simultaneously, and I have a lot of fun ideas for it. The irony is that I'm using LLMs to build it :D

b112 1 days ago [-]
We had web browsers, kinda, in that we'd call up BBSes and use ANSI for menus and such.

My VIC-20 could do this, and a C64 easily; really, it was just graphics that were wanting.

I was sending electronic messages around the world via FidoNet and PunterNet, downloaded software, was on forums, and that all on BBSes.

When I think of the web of old, it's the actual information I love.

And a terminal connected to a bbs could be thought of as a text browser, really.

I even connected to CompuServe in the early 80s via my C64 through "Datapac", a dial-up gateway via telnet.

ANSI was a standard too; it could have evolved further.

galangalalgol 22 hours ago [-]
Heavy webpages are the main barrier for projects like this. We need something that is just reader view for everything, without the overhead of also being able to do non-reader view. Like w3m or Lynx, but with sane formatting, word wrap, etc.
rainingmonkey 7 hours ago [-]
You might be interested in the Gemini protocol: https://geminiprotocol.net/
galangalalgol 7 hours ago [-]
I am interested, thank you! I still think there is room to make a text browser that allows as many webpages as possible to be usable without requiring megabytes of RAM per page. You can do things like piping curl through pandoc, but that isn't terribly useful.
kevin_thibedeau 1 days ago [-]
> graphics that were wanting

Prodigy established a (limited) graphical online service in 1988.

antidamage 23 hours ago [-]
I'm imagining that this is the work of the Boy Genius.
lich_king 20 hours ago [-]
I think the boring answer is that we waste computing resources simply because if memory and CPU cycles are abundant and cheap, developers don't find it worth their time to optimize nearly as much as they needed to optimize in the 1980s or 1990s.

Had we stopped with 1990s tech, I don't think that things would have been fundamentally different. 1980s would have been more painful, mostly because limited memory just did not allow for particularly sophisticated graphics. So, we'd be stuck with 16-color aesthetics and you probably wouldn't be watching movies or editing photos on your computer. That would mean a blow to social media and e-commerce too.

cogman10 5 hours ago [-]
I don't actually blame dev laziness for the lack of optimization.

There is certainly a level of "good enough" that's come in, but a lot of that comes not from devs but from management.

But I'll say that part of what has changed how devs program is what's fast and slow has changed from the 90s to today.

In the early 90s, you could basically load or store something in memory in 1 or 2 CPU cycles. That meant that data structures like a linked list were more ideal than data structures like an array-backed list. There was no locality impact, and adding/removing items was faster.

The difference in hard drive performance was also notable. One wasteful optimization that started in the late 90s was duplicating assets to make sure they were physically colocated with the other data being loaded at the same time. That's because the slowest memory to load from in old systems was the hard drive.

Now with SSDs, disk loading can be nearly as fast as interactions with GPU memory. Slower than main memory, but not by much. And because SSDs don't suffer as much from random access, how you structure data on disk can be wildly different. For example, for spinning disks a B-tree structure is ideal because it reduces the amount of random access across the disk. However, for an SSD, a hash data structure is generally better.

But also, the increase in memory has made a lot more tradeoffs worth it. At one point, the best thing you could do was sort your memory in some way (perhaps a tree structure) so that searching for items was faster. That is, in fact, built into C++'s `map`. But now a hash map will eat a bit more memory, and the O(1) lookup is generally a much better fit for plain lookups.
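A rough C++ illustration of that tradeoff (just the shape of it, not a benchmark):

    #include <map>
    #include <string>
    #include <unordered_map>

    // Ordered tree: modest memory, keeps keys sorted, O(log n) pointer-chasing lookups.
    std::map<std::string, int> tree_index;

    // Hash table: extra bucket memory, no ordering, O(1) average lookups.
    std::unordered_map<std::string, int> hash_index;

    int lookup(const std::string& key) {
        // With RAM cheap, the hash table is usually the better default for pure lookups;
        // the tree still wins when you need ordered iteration or range queries.
        auto it = hash_index.find(key);
        return it != hash_index.end() ? it->second : -1;
    }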

Even when we talk about the way memory allocation works, we see that different tradeoffs have been made than would be without a lot of extra memory.

State-of-the-art allocators use multiple arenas to allow multithreaded applications to allocate as fast as possible. That does mean you end up with wasted memory, but you can allocate much faster than you could in the days of old. Without that extra memory headroom, you end up with slower allocation algorithms, because wasting any space would be devastating. Burning the extra CPU cycles to find a location for an allocation ends up being the right trade-off.
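A toy sketch of that "spend memory to allocate fast" idea: a per-thread bump arena where allocation is little more than a pointer increment and the unused tail of each block is simply wasted. Real allocators (jemalloc, tcmalloc, mimalloc) are far more sophisticated, but the tradeoff is the same.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Toy per-thread bump arena: allocation is a pointer bump; freed space inside
    // a block is never reused, only reclaimed when the arena is destroyed.
    // Assumes requests are no larger than one block.
    class BumpArena {
        static constexpr std::size_t kBlock = 64 * 1024;
        std::vector<std::byte*> blocks_;
        std::byte* cur_ = nullptr;
        std::size_t left_ = 0;
    public:
        void* alloc(std::size_t n) {
            const std::size_t a = alignof(std::max_align_t);
            std::size_t pad = (a - reinterpret_cast<std::uintptr_t>(cur_) % a) % a;
            if (pad + n > left_) {              // waste the tail of the old block,
                cur_ = new std::byte[kBlock];   // grab a fresh one
                blocks_.push_back(cur_);
                left_ = kBlock;
                pad = 0;
            }
            void* p = cur_ + pad;
            cur_ += pad + n;
            left_ -= pad + n;
            return p;
        }
        ~BumpArena() { for (auto* b : blocks_) delete[] b; }
    };

    thread_local BumpArena scratch_arena;  // one arena per thread: no lock contention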

alexisread 1 days ago [-]
Apart from the transputers mentioned already, there's https://greenarrays.com/home/documents/g144apps.php

Both the hardware and the Forth software.

APIs in a B2B style would likely be much more prevalent, with less advertising (yay!) and less money in the internet, so more like the original internet, I guess.

GUIs like https://en.wikipedia.org/wiki/SymbOS and https://en.wikipedia.org/wiki/Newton_OS show that we could have had quality desktops and mobile devices.

PostOnce 23 hours ago [-]
I want to chime in on SymbOS, which I think is the perfect reply to the GP's curiosity.

https://www.symbos.org/shots.htm

This is what slow computers with a few hundred kB of RAM can do.

nxobject 17 hours ago [-]
The original Macintosh had similar specs as well – 128k with a 68k clocked at ~6-7 MHz. It helps that both platforms put a significant amount of OS code in ROM.
bluGill 1 days ago [-]
I remember using the web on 25 MHz computers. It ran about as fast as it does today with a couple of GHz. Our internet was a lot slower then as well.
Aurornis 1 days ago [-]
> I remember using the web on 25 MHz computers. It ran about as fast as it does today with a couple of GHz.

I know it’s a meme on HN to complain that modern websites are slow, but this is a perfect example of how completely distorted views of the past can get.

No, browsing the web in the early 90s was slooow. Even simple web pages took a long time to load. As you said, internet connections were very slow too. I remember visiting pages with photos that would come with a warning about the size of the page, at which point I’d get up and go get a drink or take a break while it loaded. Then scrolling pages with images would feel like the computer was working hard.

It’s silly to claim that 90s web browsers ran about as fast as they do today.

lelanthran 15 hours ago [-]
> No, browsing the web in the early 90s was slooow. Even simple web pages took a long time to load. As you said, internet connections were very slow too. I remember visiting pages with photos that would come with a warning about the size of the page, at which point I’d get up and go get a drink or take a break while it loaded.

At home, when I was on dialup, certainly.

At work I did not experience this. Most pages loaded in Netscape Navigator in about the same time that most pages load now - a few seconds.

> Then scrolling pages with images would feel like the computer was working hard.

Well, yes, single-core, single-socket and single-processor meant that the computer could literally only do a single thing at a time, and yet the scrolling experience on most sites was still good enough to be better than the scrolling experience on some current React sites.

Tor3 1 days ago [-]
Browsing the web was slow, because the network was slow. It wasn't really because the desktop computers were slow. I remember our company having just a 64 kbit/s connection to the 'net, even as late as in 1997.. well, that was pretty good compared to the place where I was contracted to at the time, in Italy.. they had 19.2 kbit/s. Really big sites could have something much better, and browsing the internet at their sites was a different experience then, using the same computers.
LeFantome 5 hours ago [-]
Ya, what is crazy is that we were “serving” web pages over those kinds of lines.
nebula8804 1 days ago [-]
This is probably me experiencing a simulacrum, but with that slow-loading, get-up-and-go-get-a-drink workflow, each page load was more special. It was magical discovering new websites, just like trying out new software by picking something up from those "pegboards" at computer stores.

It also was a simpler time; the technology was in people's lives, but as a small side quest to their main lives. It took the form of a bulky desktop in the den or something like that. When you walked away from that beige box, it didn't follow or know about the rest of your life.

A life where a Big Mac meal was only $2.99, a Toyota Corolla was $9-15k, houses were ~$100k, and average dev salaries were ~$50k. That was a different life. I don't know why, but I picture this music video that was included in the Windows 95 CD's bonus folder when I think of this simulacrum: https://www.youtube.com/watch?v=iqL1BLzn3qc

vardump 21 hours ago [-]
> music video that was included in the Windows 95 CD's bonus folder when I think of this simulacrum: https://www.youtube.com/watch?v=iqL1BLzn3qc

When I saw that video in 1995, I understood that something we now know as YouTube would be inevitable as connection speeds improved. Although I thought it'd be like MTV, a way to watch the newest music videos.

ok_dad 1 days ago [-]
No, I think he’s right. I don’t recall the web being any faster today than it was thirty years ago, download speed excepted. The overall experience is about the same, if not worse, IMO.
pibaker 24 hours ago [-]
Why would you make an exception for download speed? It was the reason why the internet was slow back then.

This is like saying Victorian Britain wasn't polluted, except for all the coal burning.

ok_dad 15 hours ago [-]
Cars have much more power today but generally don’t go much faster because they’re much heavier. Just because downloads are faster doesn’t mean the user experience is faster or more snappy. In fact, it might be worse. Quality doesn’t follow from quantity.
antidamage 23 hours ago [-]
It's not an accurate recollection at all. In 1990 a couple of us 12 year olds snuck into the university library to use the web to look at the Marathon website. It took 5 minutes to load some trivially-sized gifs and a tiny amount of HTML. They had a pretty decent connection for the day.

Web pages took a minute to load; now we're optimising them for instant response.

ghssds 17 hours ago [-]
That's really cool that in 1990, three years before the first graphical web browser, you visited a website with images about a game released in 1994.
bluGill 23 hours ago [-]
My claim is that the modern web is bloated.

I had T3 connections for most of my browsing, which was faster than the Ethernet of the day - even by today's standards that isn't too bad. I avoided dialup if I could because it was slow. Even ISDN offered okay speeds.

Aurornis 23 hours ago [-]
> My claim is that the modern web is bloated.

Your claim that I responded to was that web browsers were just as fast on 25MHz CPUs.

> I had T3 connections for most of my browsing, which was faster than the Ethernet of the day - even by today's standards that isn't too bad.

T3 speeds are very slow in today's terms. Even my cell phone does a couple orders of magnitude better from where I'm sitting.

There are a lot of weird claims going on in your posts. I think it's a lot of nostalgia coloring your views of how fast things were in the past.

bluGill 23 hours ago [-]
The modern web is very bloated, and the actual experience isn't much different. Of course some of that bloat does more, but much of it doesn't.
pibaker 23 hours ago [-]
If you want to complain about the state of modern web, you can just do it. You don't need to spin up a story about how the old internet was faster than it actually was.

This is the same pattern you see in politics when people on all sides (even the nominally progressive ones) lie to each other about how great the olden days were, when in reality it's all about their dissatisfaction with the present day.

bluGill 9 hours ago [-]
The subject here is what if computers hadn't advanced from the 1990s. Complaining about modern web bloat would be off topic. Commenting about how things weren't much slower fits the topic though.
fragmede 23 hours ago [-]
Wirth's law in effect.
raverbashing 1 days ago [-]
Yeah slow?

Try using a 2400 baud modem - that was slow.

bluGill 23 hours ago [-]
I started on 300baud - but never accessed the internet from that so I won't count it in this discussion.
vardump 16 hours ago [-]
Those things always confuse me. I think 2400 baud modems were like 9600 bps? At least 56k modems were 8000 baud.
raverbashing 16 hours ago [-]
no, the other way

2400 were 300 bytes per second

(though it might be that 9600bps worked at the 'official definition' of 2400 baud but nobody advertised it like that)

vardump 15 hours ago [-]
You seem to be confusing baud and bits per second. Baud is symbols per second. Usually one symbol represents multiple bits. AFAIK in 56k modems one symbol corresponds up to 7 bits.
raverbashing 13 hours ago [-]
I am not confusing anything, it was the marketing that was confusing

Of course you're correct and a 56k modem is something like 8k Baud, but in marketing the bigger number usually wins

And up until the 2400bps modems IIRC bauds and bps were interchangeable
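For the curious, the back-of-the-envelope arithmetic (the symbol rates and bits-per-symbol below are the usual textbook figures; exact constellations and usable rates vary by standard and line conditions):

    #include <cstdio>

    // Line rate = symbol rate (baud) x bits per symbol.
    int main() {
        struct Modem { const char* name; int baud; int bits_per_symbol; };
        const Modem modems[] = {
            {"V.22bis \"2400\"",       600, 4},  // 16-QAM: 2400 bit/s, ~300 bytes/s
            {"V.32 \"9600\"",         2400, 4},  // trellis-coded: 9600 bit/s
            {"V.90 \"56k\" downlink", 8000, 8},  // 8000 PCM codes/s: 64 kbit/s ceiling,
        };                                       // roughly 56k usable in practice
        for (const Modem& m : modems) {
            std::printf("%-22s %5d baud x %d bits/symbol = %6d bit/s\n",
                        m.name, m.baud, m.bits_per_symbol,
                        m.baud * m.bits_per_symbol);
        }
        return 0;
    }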

exe34 1 days ago [-]
What a glorious time that was! Now it's too easy to get stuck looking at the stream of (usually AI-generated) crap. I long for the time when the regular screen break was built in.
peterfirefly 1 days ago [-]
It crashed a lot more, the fonts (and screens) were uglier, and Javascript was a lot slower. The good thing was that there was very little Javascript.
nxobject 23 hours ago [-]
> The good thing was that there was very little Javascript.

Because all of the complicated client side stuff was in Java applets or Shockwave :( Pepperidge Farm remembers having to wait 10 minutes for a GameBoy emulator to load to play Pokémon Yellow on school computers…

graemep 1 days ago [-]
I cannot recall crashes being a problem.
II2II 23 hours ago [-]
I remember Netscape Navigator crashing, taking Solaris down with it. I could only imagine what it was like on Windows 9x. I don't want to imagine what Windows 3.x users endured. Windows 3.x was the OS where people saved early and saved often, since the lack of proper memory protection meant that a bad application (or worse, a bad driver) could BSOD the system at any time.
ksherlock 23 hours ago [-]
I once did an April Fool's spoof of netscape that displayed a wait cursor for 2 minutes then a bomb alert. For classic Mac, it was 90% accurate with only 1% the disk footprint.
tom_ 20 hours ago [-]
With Windows 9x, I recall the crashes being manageable, but it was advisable to give the system 15 minutes to settle down after rebooting. Windows would start multiple things at once on startup and it was a bit risky to overstress it.

Windows NT 4 seemed OK, but a lot of software didn't run.

By the time of Windows 2000 the tradeoff was much better.

(Allowing a settle down time remained a good idea, in my experience. Even if Windows 2000 and later were very unlikely to actually crash, the response time would still be dogshit until everything had been given time to settle into a steady state. This problem didn't get properly fixed until pervasive use of SSDs - some time between Windows 7 and Windows 8, maybe? - and even then the fix was just that there was no longer any pressing need to actually fix it.)

t-3 1 days ago [-]
I remember using the web in the 90s. I often left to make a sandwich while pages loaded.
rbanffy 1 days ago [-]
Try opening Gmail on one of those. Won’t be fun.
jxdxbx 5 hours ago [-]
I like the 32 bit era because it's the only time we had symmetrical data and address buses. 32 bit data bus on chip. 32 bit addressable RAM, 4 GB. Used to be 8/16 and now it's 64/whatever. Understandable decisions given needs/constraints but sort of messy.
grvbck 22 hours ago [-]
> One thing I do know for sure. LLMs would have been impossible.

We had ELIZA, and that was enough for people to anthropomorphize their teletype terminals.

tonyedgecombe 1 days ago [-]
I always think the Core 2 Duo was the inflexion point for me. Before that, current software always seemed to struggle on current hardware, but afterwards it was generally fine.

As much as I like my Apple Silicon Mac I could do everything I need to on 2008 hardware.

vardump 20 hours ago [-]
It's remarkable how a modern $50 SBC outperforms the old Core 2 Duo line.
keyringlight 1 days ago [-]
Alongside the power of a single core, that era also brought the adoption of multicore and the move from 32 to 64 bit for the general user, which enabled more than 4 GB of memory and let lots of processes coexist more gracefully.
wolvoleo 10 hours ago [-]
No, if we had the web it would be more like what Gopher was. Or maybe Lynx.

Edit: oh, I thought you meant if we were stuck with 6502-style stuff. With megabytes of RAM we'd be able to do a lot more. When I was studying, we ran 20 X terminals with NCSA Mosaic on a server with a few CPUs and 128 MB of RAM or so. Graphical browsing would be fine.

Only when Java and JavaScript came on the scene did things get unbearably slow. I guess in that scenario most processing would have stayed server-side.

komodo99 6 hours ago [-]
Of course it was fast with that much ram, you'd just cache the entire interwebs. /s
Someone 12 hours ago [-]
> Would we still get web browsers?

https://en.wikipedia.org/wiki/PLATO_(computer_system) is from the 1960s, so, technically, it certainly is possible. Whether it would make sense commercially to support a billion users would depend on whether we would stay stuck on prices of the eighties, too.

Also, there's mobile usage. Would it be possible to build a mobile network with thousands of users per km² with tech from the eighties?

JdeBP 1 days ago [-]
Transputers. Lots and lots and lots of transputers. (-:
dpe82 1 days ago [-]
rbanffy 1 days ago [-]
Lots and lots of red LEDs. Such an iconic machine! I miss computers that look good.

BTW, IBM has been doing a fine design job with their quantum computers - they aren’t quite the revolution we were promised, but they do look the part.

kaashif 1 days ago [-]
I don't think there's really a credible alternate reality where Moore's law just stops like that when it was in full swing.

The ones that "could have happened" IMO are the transistor never being invented, or even mechanical computers becoming much more popular much earlier (there's a book about this alternate reality, The Difference Engine).

I don't think the invention of the transistor was that certain to happen; we could've got better vacuum tubes, or maybe something else.

jhbadger 1 days ago [-]
As someone has brought up, Transputers (an early parallel architecture) were a thing in the 1980s because people thought CPU speed was reaching a plateau. They were kind of right (which is why modern CPUs are multicore) but were a decade or so too early, so transputers failed in the market.
rbanffy 1 days ago [-]
CPU cores are still getting faster, but not at the 1980s/90s cadence. We get away with that because the cores have been good enough for a decade - unless you are doing heavy data crunching, the cores will spend most of their time waiting for you to do something. I sometimes produce video, and the only time I hear the fans turn on is when I am encoding content. And even then, as long as ffmpeg runs with `nice -n 19`, I continue working normally as if I had the computer all to myself.
bilegeek 21 hours ago [-]
If you're on Linux, I'd highly recommend trying `chrt -i 0`. Not quite night-and-day compared to nice 19, but anecdotally it is noticeable, especially if you game.
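The programmatic equivalents on Linux, if you'd rather bake this into a launcher than remember the flags (a sketch: `nice -n 19` roughly corresponds to setpriority(), and `chrt -i 0` to the SCHED_IDLE scheduling class, which only runs when nothing else wants the CPU):

    #include <sched.h>
    #include <sys/resource.h>
    #include <sys/types.h>

    // Demote a process (e.g. the ffmpeg encode) to true background work.
    // SCHED_IDLE is Linux-specific; glibc exposes it under _GNU_SOURCE,
    // which g++ defines by default.
    void make_background(pid_t pid) {
        setpriority(PRIO_PROCESS, pid, 19);        // equivalent of `nice -n 19`
        sched_param sp{};                          // SCHED_IDLE requires priority 0
        sched_setscheduler(pid, SCHED_IDLE, &sp);  // equivalent of `chrt -i 0`
    }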
vardump 1 days ago [-]
When the MC68030 (1986) was introduced, I remember reading that computers probably wouldn't get much faster, because PCB signal integrity would not allow further improvements.

People at the time were not actually sure how long the improvements would go on.

jecel 1 days ago [-]
We were stuck with 33MHz PCBs for a long time as people kept trying and failing to get 50MHz PCBs to work. Then Intel came out with the 486DX2 which allowed you to run a 50MHz processor with an external 25MHz bus (so a 25MHz PCB) and we started moving forward again, though we did eventually get PCBs to go much faster as well.

The Transputers (mentioned in other comments) had already decoupled the core speed from the bus speed and Chuck Moore got a patent for doing this in his second Forth processor[1], which patent trolls later used to extract money from Intel and others (a little of which went to Chuck and allowed him to design a few more generations of Forth processors).

[1] https://en.wikipedia.org/wiki/Ignite_(microprocessor)

rbanffy 1 days ago [-]
> We were stuck with 33MHz PCBs for a long time as people kept trying and failing to get 50MHz PCBs to work.

What are the best symbol rates we currently get on PCB traces? I know we've been multiplexing a lot of channels using the same tricks we used with modems to get above 9600 bps on POTS.

nxobject 20 hours ago [-]
How conservative of a lower bound would next-gen PCI-e give?
PetahNZ 1 days ago [-]
We did have web browsers; I had Internet Explorer on Windows 3.1, on a 33 MHz machine with 8 MB of RAM.
phwbikm 1 days ago [-]
I still remember the Mosaic from NCSA. Internet in a box.
drzaiusx11 1 days ago [-]
It was probably Windows for Workgroups 3.11, as IIRC Windows 3.1 didn't ship with a TCP/IP stack.
dpe82 1 days ago [-]
There was a sockets API though (https://en.wikipedia.org/wiki/Winsock) and IIRC we all used Trumpet Winsock on Windows 3.1 with our dialup connections. But could have been 3.11 - my memory is a bit hazy.
rbanffy 1 days ago [-]
3.11 was so much nicer than 3.1 (and 3.0) I can’t imagine not moving to it as soon as possible.
aleph_minus_one 24 hours ago [-]
Windows for Workgroups 3.11 did not contain Cardfile. :-(
rbanffy 23 hours ago [-]
Didn’t it have a proper address book? I remember I could send faxes through Mail.
aleph_minus_one 23 hours ago [-]
> Didn’t it have a proper address book?

Schedule+, which shipped with Windows for Workgroups 3.11, had address book functionality that was clearly better than Cardfile.

But people used Cardfile for many purposes other than serving as an address book.

dpe82 24 hours ago [-]
I was like 11 at the time buying computer stuff with lawn care earnings so I used whatever I could get my hands on. :)
antidamage 1 days ago [-]
Teletext existed in the 80s and was widely in use, so we'd have some kind of information network.

BBSes existed at the same time and if you were into BBSes you were obsessive about it.

eqvinox 18 hours ago [-]
You'd probably get much more multiprocessor stuff much earlier. There are probably 2 or 3 really good interfaces to wire an almost arbitrary number of CPUs together and run some software across all of them (AMP, not SMP).
yoyohello13 1 days ago [-]
This is basically the premise of the Fallout universe. I think in that story it was the transistor that was never invented, though.
myself248 1 days ago [-]
And imagine if telecom had topped out around ISDN somewhere, with perhaps OC-3 (155Mbps) for the bleeding-fastest network core links.

We'd probably get MP3, but not video to any great or compelling degree. A mostly-text web, perhaps more Gopher-like. Client-side stuff would have to be very compact; I wonder if NAPLPS would've taken off.

Screen reader software would probably love that timeline.

iberator 1 days ago [-]
You are wrong. The Windows 3.11 era used CPUs running at something like 33 MHz, and yet we had TONS of graphical applications, including web browsers, Photoshop, CAD, Excel, and instant messengers.

Only thing that killed web for old computers is JAVASCRIPT.

vidarh 1 days ago [-]
I don't see how this contradicts any of what they said, unless they've edited their comment.

You're right we had graphical apps, but we did also have very little video. CuSeeMe existed - video conferencing would've still been a thing, but with limited resolution due to bandwidth constraints. Video in general was an awful low res mess and would have remained so if most people were limited to ISDN speeds.

While there were images on the web, the graphical flourishes were still heavily bandwidth-limited.

The bandwidth limit they proposed would be a big deal even if CPU speeds continued to increase (it could only mitigate so much with better compression).

iberator 13 hours ago [-]
I remember watching adult videos on Windows 3.11 with a 486DX at 100 MHz - a Video CD-ROM thing. I guess it was just MPEG-2 format.
rbanffy 1 days ago [-]
> Only thing that killed web for old computers is JAVASCRIPT.

JavaScript is innocent. The people writing humongous apps with it are the ones to blame. And memory footprint. A 16 MB machine wouldn’t be able to hold the icons an average web app uses today.

bitwize 24 hours ago [-]
Netscape was talking about making the Web an app platform to replace Microsoft Windows even way back then. The world we're living in today is exactly what they envisioned.
rbanffy 23 hours ago [-]
Electron wouldn’t be possible back then.
bitwize 14 hours ago [-]
Electron was effectively invented by Microsoft in 1999:

https://en.wikipedia.org/wiki/HTML_Application

Which is funny, because HTML, Java, and JavaScript were being talked about as an app platform a few years before then, precisely to prevent Microsoft from drinking everybody's milkshake on the desktop.

cluckindan 1 days ago [-]
Not JavaScript. Facebook.
j16sdiz 1 days ago [-]
Netscape 2 supported JavaScript on 16-bit Windows 3.1.
phwbikm 1 days ago [-]
I have a Hayes 9600 bps modem for web surfing.
rbanffy 1 days ago [-]
“Web surfing” sounds so much healthier than “doom scrolling”…
rm30 1 days ago [-]
I remember when I went from a 286 to a 486DX2; the difference was impressive, able to run a lot of graphical applications smoothly.

Ironically, now I'm using an ESP32-S3, 10x more powerful, just to run IoT devices.

petra 1 days ago [-]
It's probably possible to develop analog ADSL chips with 1990 semiconductor tech. But pretty difficult.
drob518 1 days ago [-]
Depends how pervasive OC3 would have gotten. A 1080p video stream is only about 7 Mbps today.
fhars 1 days ago [-]
You only have to bundle about 110 ISDN channels to transfer that (four E1 or five T1 trunk lines).
myself248 12 hours ago [-]
Right, but the point is, assume the "backbone" never got fast enough to have a million subscribers all doing that at once.

I remember a subscriber T1 costing 4 figures per month, and I don't think it's because the copper pairs themselves were any different. (They weren't. As long as they didn't have bridge-taps, it was just plain old pairs. The repeaters every few kilofeet were not that expensive either.)

I remember the early-90s internet guidance that idle traffic like keepalive pings was discouraged, especially if you were sending traffic overseas, because it cluttered up the backbone links with packets that weren't actually valuable, and that was rude / abusive. Presumably edge CDNs would've still happened (or, ISPs providing Usenet servers basically did a lot of that already), but you simply wouldn't be doing video over the internet at large because the bandwidth charges would kill you.

drob518 7 hours ago [-]
You would still have video happening, but it would not be the type we have today (streaming arbitrary full-length movies from a nearly infinite catalog and YouTube). It would be used for big events and things like that. We might still have gotten podcasting, though.
drob518 7 hours ago [-]
Right, but a T3 could have handled multiple.
nicksergeant 22 hours ago [-]
You should definitely watch Maniac: https://en.wikipedia.org/wiki/Maniac_(miniseries)
anticodon 4 hours ago [-]
> Would we still get web browsers?

There was the Lynx text browser, which was even ported to MS-DOS. I was using it until about 2010. It was a great browser until websites became unusable.

vidarh 1 days ago [-]
There are web browsers for 8-bit machines today, and back in the day there were web browsers for, e.g., Amigas with the 68000 CPU (a design from 1979).
romperstomper 1 days ago [-]
> One thing I do know for sure. LLMs would have been impossible.

Maybe they could, as ASICs in some laboratories :)

nxobject 23 hours ago [-]
Honestly, I think we could've pulled it off a lot earlier if GPU development had invested in GPGPU earlier.

I can see it now... a national lab can run ImageNet, but it takes so many nodes with unobtainium 3dfx stuff that you have to wait 24 hours for a run to be scheduled and completed.

zi2zi-jit 1 days ago [-]
tbh we'd probably just have really good Forth programmers instead of LLMs. same vibe, fewer parameters.
Gibbon1 13 hours ago [-]
I was doing Schematic Capture and Layout on a 486 with <counts voice> one two three four five six seven eight 8 megabytes of RAM ah haha.
DeathArrow 16 hours ago [-]
>I sometimes wonder what the alternate reality where semiconductor advances ended in the eighties would look like.

We would have seen far fewer desktop apps being written using JavaScript frameworks.

dheera 1 days ago [-]
> Would we still get web browsers?

Yes, just that they would not run millions of lines of JavaScript for some social media tracking algorithm, newsletter signup, GDPR popup, newsletter popup, ad popup, etc. and you'd probably just be presented with the text only and at best a relevant static image or two. The web would be a place to get long-form information, sort of a massive e-book, not a battleground of corporations clamoring for 5 seconds of attention to make $0.05 off each of 500 million people's doom scrolling while on the toilet.

Web browsers existed back then; the web in the days of NCSA Mosaic was basically exactly the above.

Aurornis 1 days ago [-]
The whitewashing of the past in this thread is something else.

Did everyone forget the era of web browsing when pages were filled with distracting animated banner ads?

The period when it was common for malicious ads to just hijack the session and take you to a different page?

The pop-up tornados where a page would spawn pop ups faster than you could close them? Pop unders getting left behind to discover when you closed your window?

Heavy flash ads causing your browser to slow to a crawl?

The modern web browsing experience without an ad blocker feels tame compared to the early days of Internet ads.

dxdm 1 days ago [-]
What you describe sounds like the late nineties to me, not what we had with the technology of (at most) 1990. There are orders of magnitude between the available performance and memory at the two ends of that decade.
dheera 17 hours ago [-]
Most of what you describe were the late 1990s/early 2000s, not the days of NCSA Mosaic.

Also

> distracting animated banner ads

These were characteristic of the late 90s, but were way easier to block back then. Just put "0.0.0.0 ad.doubleclick.net" in your /etc/hosts and 99% of them went away. Or send them to 127.0.0.1 and serve a white single pixel GIF with Apache to avoid web browsers hanging on the request.

The popups of the late 90s were easy to get rid of too, all you had to do was disable JS. There were almost zero websites that did anything good with JS back then.

Flash crap? Don't install it if you don't want it.

intrasight 1 days ago [-]
Well, we wouldn't have ads and tracking.
vidarh 1 days ago [-]
Prodigy launched online ads from the 1980s. AOL as well.

HotWired (Wired's first online venture) sold their first banner ads in 1994.

DoubleClick was founded in 1995.

Neither were limited to 90's hardware:

Web browsers were available for machines like the Amiga, launched in 1985, and today you can find people who have made simple browsers run on 8-bit home computers like the C64.

DerekL 17 hours ago [-]
I actually used an Amiga to browse the web, back in 1994 or 1995. I started with Lynx, but then I switched to a graphical browser (probably Mosaic).

This was an Amiga 500 with maxed-out RAM (8 MB) and a hard drive.

peterfirefly 1 days ago [-]
If such an alternate reality has internet of any speed above "turtle in a mobility scooter" then there for sure would be ads and tracking.
p_ing 1 days ago [-]
The young WWW had garish flashing banner ads.
kevin_thibedeau 1 days ago [-]
The young web had no ads at all.
vidarh 11 hours ago [-]
For a very narrow definition of "the young web".

The period without ads lasted ~4 years, during which almost nobody used the web, and even fewer used a graphical browser.

Mosaic 0.5 was first released in January '93 (yes, there were other graphical browsers preceding it, like Viola, but none saw broad distribution)

Netscape was first launched in October '94.

By '94 the first banner ads existed.

DoubleClick started selling banner ads in '95.

p_ing 20 hours ago [-]
You're misremembering the young web.
user3939382 1 days ago [-]
Actually real AI isn’t going to be possible unless we return to this arch. Contemporary stacks are wasting 80% of their energy which we now need for AI. Graphics and videos are not a key or necessary part of most computing workflows.
billygoat 9 hours ago [-]
Wait, there is an 800x480 display connected, but the thing only has 46k of RAM. There's no explanation of the display approach being used.

The extended graphics commands seem to allow X/Y positioning with an 8-bit color.

I think the picture shows an 80x25 screen?

What gives here? Anyone know what's going on?

alnwlsn 9 hours ago [-]
The display controller they are using (RA8875 or RA8889) has several hundred KB of internal memory. So you can write to the screen and the image will "stay there", as it were; you don't have to store a framebuffer or keep writing out the image like with a CRT.
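A hedged sketch of the host's side of that arrangement (the opcodes here are hypothetical, not the real RA8875 register map): the host never holds pixels; it just pushes a few command bytes per operation while the controller keeps the whole 800x480 frame in its own RAM.

    #include <cstdint>

    // Stand-in for the SPI/parallel interface to the display controller.
    void bus_write8(uint8_t byte);

    // Hypothetical command set: the controller owns the framebuffer, the host
    // only sends tiny commands - about 7 bytes per glyph instead of the ~375 KB
    // an 800x480x8bpp framebuffer would need on the host side.
    enum : uint8_t { CMD_SET_CURSOR = 0x01, CMD_PUT_CHAR = 0x02 };

    void put_char_at(uint16_t x, uint16_t y, char c) {
        bus_write8(CMD_SET_CURSOR);
        bus_write8(x & 0xFF); bus_write8(x >> 8);   // 16-bit coordinates, low byte first
        bus_write8(y & 0xFF); bus_write8(y >> 8);
        bus_write8(CMD_PUT_CHAR);
        bus_write8(static_cast<uint8_t>(c));
    }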
jensgk 9 hours ago [-]
It probably has a character-mapped display, so you can only display 256 different (ASCII and graphics) characters in a memory-mapped 80*25 = 2000-byte display buffer.

EDIT: I can now see that it does have bit-mapped graphics. It must have a built-in serial-terminal-like interface with graphics capabilities.

EDIT2: Probably using this chip: https://www.adafruit.com/product/1590

:-)

deckar01 1 days ago [-]
3D printer beds have been getting bigger, but slicers don’t seem to account for curling as large prints cool. The problem is long linear runs on bottom infill and perimeters shrinking. I’ve been cutting my large parts into puzzle like shapes, but printing them fully assembled. This adds curved perimeters throughout the bottom layer, reducing the distance stress can travel before finding a seam to deform.

That said, a retro laptop this thick would look really nice in stained wood.

ezulabs 5 hours ago [-]
I stopped seeing those issues after switching to a Bambu Lab X1 3D printer; it just works 99 percent of the time. My old Ender 3 V2 had those issues because the bed changed shape during printing (thin metal deforming at a micro level from temperature and mechanical stress, with 4 knobs pulling it down), parts not sticking well, cooling that was too slow or that disturbed the bed temperature profile, and leveling that was way off even with auto-level probes. Also, an open 3D printer with a bunch of airflow is a killer for those prints. OP should try an enclosed, more accurate printer like a Bambu Lab from a local 3D print shop or a hackerspace after locking down the design and enclosure iterations.
lawn 4 hours ago [-]
With proper bed leveling, meshing, chamber temperature, ears/brim, and glue this shouldn't be a big issue.

What printer are you using?

deckar01 1 hours ago [-]
Prusa XL with higher temp filament, not enclosed. I was making parts that spanned corner to corner. It works fine once I prevent it from making 400mm linear runs.
rustyhancock 1 days ago [-]
Stunning work! Astounding progress, given it's under 3 months from PCB to this result.

Funnily enough, I've been musing this past month about whether I'd separate work better if I had something as limited as an Amiga A1200 for anything other than work! This would fit nicely.

Please do submit this to Hackaday; I'm sure they'd salivate over it, and it's amazing when you have the creator in the comments. Even if just to explain that no, a 555 wouldn't quite achieve the same result. No, not even a 556...

guidoism 1 days ago [-]
> Yes, I know I'm crazy, but

Any time I see this phrase I know these are my people.

readme 1 days ago [-]
Crazy for wanting a computer that's actually yours.

I believe there will come a day when people who can do this will be selling these on the black market for top dollar.

jrmg 8 hours ago [-]
I love this.

It always mildly tickles me when retrocomputer designs use anachronistic processors way more powerful than the CPU in their design - in this case, there's an ATmega644 as a keyboard controller (64K ROM - although only 4K RAM, up to 20MHz) and presumably something pretty powerful in the display board.

ekaryotic 1 days ago [-]
Neat. Not something I'd hanker for. I saw a 16-core Z80 laptop years ago and I often think about it because it can multitask. https://hackaday.com/2019/12/10/laptop-like-its-1979-with-a-...
nine_k 1 days ago [-]
I implemented "multitasking" (well, two-tasking) between a BASIC program and native code on a Z80, using a "supervisor" driven by hardware interrupts. There's only so much you can pack into a 4 MHz CPU with a 4-bit ALU (yes, not 8-bit). It worked for soft-realtime tasks, but would make a rather weak desktop.
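For the general shape of it, here's a tiny cooperative version in C, using POSIX ucontext as a stand-in for the interrupt-driven register save/restore the supervisor did (the real thing preempted on a timer interrupt instead of explicit yields, and obviously wasn't written in C on the Z80):

  /* Two "tasks" taking turns under a supervisor loop. ucontext (Linux/glibc)
   * stands in for the Z80 register save/restore; explicit swapcontext calls
   * stand in for the hardware interrupt that forced the switch. */
  #include <stdio.h>
  #include <ucontext.h>

  static ucontext_t supervisor, task_a, task_b;
  static char stack_a[65536], stack_b[65536];

  static void basic_task(void)              /* stand-in for the BASIC interpreter */
  {
      for (int i = 0; i < 3; i++) {
          printf("BASIC slice %d\n", i);
          swapcontext(&task_a, &supervisor);   /* "interrupt": hand control back */
      }
  }

  static void native_task(void)             /* stand-in for the native-code job */
  {
      for (int i = 0; i < 3; i++) {
          printf("native slice %d\n", i);
          swapcontext(&task_b, &supervisor);
      }
  }

  int main(void)
  {
      getcontext(&task_a);
      task_a.uc_stack.ss_sp   = stack_a;
      task_a.uc_stack.ss_size = sizeof stack_a;
      task_a.uc_link          = &supervisor;
      makecontext(&task_a, basic_task, 0);

      getcontext(&task_b);
      task_b.uc_stack.ss_sp   = stack_b;
      task_b.uc_stack.ss_size = sizeof stack_b;
      task_b.uc_link          = &supervisor;
      makecontext(&task_b, native_task, 0);

      for (int round = 0; round < 3; round++) {   /* the "supervisor" loop */
          swapcontext(&supervisor, &task_a);
          swapcontext(&supervisor, &task_b);
      }
      return 0;
  }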
shrubble 22 hours ago [-]
The follow-on to CP/M, which ran on the Z80, is MP/M, which is a multitasking OS.
ted_dunning 1 days ago [-]
I love the super clunky retro esthetic!

Takes me back to a time when a laptop would encourage the cat to share a couch because of the amount of heat it emitted.

Amazingly quick as well. Pointless projects are so much better and more fun when they don't take forever!

Western0 5 hours ago [-]
Can this run for more than a week on one AA battery?
marcodiego 1 days ago [-]
Maybe this can achieve RYF certification.

What I would really love: modern (continuously manufactured, less-than-10-year-old tech) devices that are RYF-certified.

flopsamjetsam 1 days ago [-]
I love the case material. What is it? It looks like what they make the bulk post boxes out of here (if you ship a lot of material via post, they give you these boxes to carry it to/from the delivery centre), or corflute (the material election candidates' posters are made of around here).
speedgoose 16 hours ago [-]
Looks like 3D printed PLA.
drob518 1 days ago [-]
Brilliant! I love it. Bonus points for using the eWoz monitor. It’s giving me the itch to build it.
louismerlin 1 days ago [-]
Awesome! Gives me mnt pocket reform vibes.

https://shop.mntre.com/products/mnt-pocket-reform

wakest 1 days ago [-]
lol hi merlin, was peeking in the comments wondering if anyone would say this
userbinator 24 hours ago [-]
I wonder how long the battery lasts. The LCD backlight probably draws more power than the CPU (<0.1W, even with no special low-power idle modes.)
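Just to put rough numbers on it (all assumed, nothing measured from this build): if the CPU side really is ~0.1 W and the backlight is ~0.5 W, a ~7 Wh pack lasts on the order of half a day, and it's the backlight that dominates.

  /* Back-of-the-envelope runtime estimate. Every figure here is an
   * assumption for illustration, not a measurement of the LT6502. */
  #include <stdio.h>

  int main(void)
  {
      double cpu_w       = 0.1;   /* assumed CPU + glue draw */
      double backlight_w = 0.5;   /* assumed LCD backlight draw */
      double battery_wh  = 7.0;   /* assumed pack capacity */
      printf("runtime: %.1f hours\n", battery_wh / (cpu_w + backlight_w));
      /* ~11-12 hours with these guesses */
      return 0;
  }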
detay 1 days ago [-]
This post made me smile. Why not!!! The 6502 was my first processor. <3
rbanffy 1 days ago [-]
6502 based computers shouldn’t have a “dir” command. It’s “catalog” for detailed info or “cat” for the short one.
vardump 20 hours ago [-]
No, it should be

  LOAD "$", 8
rickcarlino 24 hours ago [-]
Recently purchased a Pocket8086 and I can say – these sorts of things are _very_ fun.
zahlman 1 days ago [-]
> 46K RAM

Not 64?

(Edit: I see part of the address space is reserved for ROM, but it still seems a bit wonky.)

wkjagt 23 hours ago [-]
The 6502 doesn't have separate I/O addresses, so you need to fit all devices into the 64K address space, not just the ROM.
p0w3n3d 11 hours ago [-]
The Atari 130XE used bank switching to get at more memory alongside the I/O-reserved addresses (i.e. you had an address, $D301, where you would change bits to select the memory bank, and it would redirect $4000 – $7FFF to another bank in the extended memory).
wkjagt 4 hours ago [-]
Yeah the Commodore 64 did something similar. I love that era of hardware.
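To make the 64K squeeze and the bank trick concrete, here's a toy address decoder in C with a made-up layout (46K of RAM, an I/O hole, ROM at the top, and a 130XE-style bank register over a $4000–$7FFF window). It's only an illustration, not the actual LT6502 or Atari memory map:

  /* Toy 6502-style address decoder: one hypothetical way RAM, I/O and ROM
   * could share the 64K space, plus a bank register that swaps what a
   * $4000-$7FFF window points at. Layout invented for illustration. */
  #include <stdint.h>
  #include <stdio.h>

  #define BANK_WINDOW_LO 0x4000
  #define BANK_WINDOW_HI 0x7FFF
  #define BANK_REG       0xD301            /* bank-select register, 130XE-style */

  static uint8_t ram[46 * 1024];           /* base RAM: $0000-$B7FF */
  static uint8_t extended[4][16 * 1024];   /* extra banks behind the window */
  static uint8_t rom[8 * 1024];            /* ROM: $E000-$FFFF */
  static uint8_t bank = 0;                 /* 0 = base RAM, 1..3 = extended banks */

  static uint8_t read_byte(uint16_t addr)
  {
      if (addr >= BANK_WINDOW_LO && addr <= BANK_WINDOW_HI && bank != 0)
          return extended[bank][addr - BANK_WINDOW_LO];   /* window redirected */
      if (addr <= 0xB7FF)
          return ram[addr];                               /* 46K of base RAM */
      if (addr >= 0xE000)
          return rom[addr - 0xE000];                      /* 8K ROM at the top */
      return 0xFF;                                        /* I/O hole / unmapped */
  }

  static void write_byte(uint16_t addr, uint8_t val)
  {
      if (addr == BANK_REG) { bank = val & 0x03; return; }    /* select a bank */
      if (addr >= BANK_WINDOW_LO && addr <= BANK_WINDOW_HI && bank != 0)
          extended[bank][addr - BANK_WINDOW_LO] = val;
      else if (addr <= 0xB7FF)
          ram[addr] = val;
      /* writes to ROM and the I/O hole are ignored in this toy */
  }

  int main(void)
  {
      write_byte(0x4000, 0x11);   /* lands in base RAM (bank 0 leaves the window alone) */
      write_byte(BANK_REG, 1);    /* flip the window to extended bank 1 */
      write_byte(0x4000, 0x22);   /* same CPU address, different physical memory */
      write_byte(BANK_REG, 0);
      printf("bank 0 sees: %02X\n", read_byte(0x4000));   /* 11 */
      write_byte(BANK_REG, 1);
      printf("bank 1 sees: %02X\n", read_byte(0x4000));   /* 22 */
      return 0;
  }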
facorreia 1 days ago [-]
This would have been absolutely mind blowing back in the day!
lloydatkinson 9 hours ago [-]
This was very interesting until I saw it had a Pi Pico for some reason
p0w3n3d 1 days ago [-]
Wow. It's fresh as a rose! Congratulations!
starkeeper 24 hours ago [-]
How about cassette tape storage?
HardwareLust 23 hours ago [-]
Serious question: why 6502?
ggm 22 hours ago [-]
BBC Micro, Acorn Atom, Commodore PET: all kinds of home computers. So it's prime retro material from the late 70s and early 80s. There's a CP/M port, so you can use PIP, which traces its lineage back to the DEC TOPS-10 operating system if not before (it's a peripheral I/O command model, although I think CP/M's PIP only shares the name).

Add a DIN plug and record programs in Kansas City Standard on a cassette recorder. It could be a Walkman. A floppy (the full 8" type) was a luxury. Almost a megabyte! Imagine what you could do... when a program is the amount of text you can fit in the VBI of a Ceefax/teletext broadcast, or is typed in by hand in hex. Kansas City Standard is 300 bits/second and the tape plays in real time, so a standard C60 is like 160kb on both sides if you were lucky: it misread and miswrote a LOT.

I used to do tabular GOTO jump-table text adventures, and use XOR screen line drawing to do moving moiré-pattern interference fringes. "Mod scene" trippy graphics!

That's a Mandelbrot in ASCII on the web page, the best I've seen. Super stuff.

People wrote tiny languages for the 6502: integer-only but with C-like syntax, or Pascal or ALGOL. People did real science in BASIC; a one-weekend course got you what you needed to do some maths for a Master's or PhD in some non-CS field.

My friends did a lot more of this than I did. Because I had access to a DEC-10 and a PDP-11 at work, and later VAX/VMS and BSD UNIX systems, I didn't see the point of home machines. A wave I wish I'd ridden, but not seeing the future emerge has been a constant failing of mine.

p0w3n3d 11 hours ago [-]
I wrote games in BASIC (mostly copied from printed listings and altered). Too bad I didn't have enough understanding of what could have been done in assembly language... Now I keep rediscovering these things, but only for the sake of nostalgia (and personal development).
wvenable 21 hours ago [-]
The 6502 is the best 8-bit CPU for learning stuff. There's a lot you could add to it, but there is very little you could take away. It's minimal, but you have everything you need.
lysace 1 days ago [-]
Good timing. My current weekend project is constructing something similar to the first third of Ben Eater's 6502 design (last weekend was the clock module plus some eccentricities).

It occurred to me that given the 6502's predictable clock cycle timings, it should be possible to create a realtime disassembler using e.g. an Arduino Mega 2560 plus a character LCD display attached to the 6502's address/data/etc. pins.

Of course, this would only be useful in single-stepping/very slow clock speeds. Still, I think it could be useful in learning how the 6502 works.

Is there relevant prior work? I'm struggling with my google fu.
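Ben Eater's own series uses an Arduino Mega as a bus monitor, which gets you most of the way: sample the address/data pins (plus SYNC) on each clock edge, and when SYNC is high the data byte is an opcode fetch, so you can look up a mnemonic. A stripped-down C sketch of just that lookup step, with a canned trace standing in for real pin reads and only a handful of opcodes filled in:

  /* Sketch of the lookup step for a bus-sniffing disassembler: each
   * sample is (address, data, sync); when SYNC is high the data byte
   * is an opcode fetch, so print its mnemonic. The canned trace stands
   * in for reading the Mega's pins on the clock edge, and only a few
   * opcodes are filled in. */
  #include <stdint.h>
  #include <stdio.h>

  struct bus_sample { uint16_t addr; uint8_t data; int sync; };

  static const char *mnemonic(uint8_t opcode)
  {
      switch (opcode) {               /* tiny subset of the 6502 opcode map */
      case 0xA9: return "LDA #imm";
      case 0x8D: return "STA abs";
      case 0x4C: return "JMP abs";
      case 0xEA: return "NOP";
      case 0x00: return "BRK";
      default:   return "???";
      }
  }

  int main(void)
  {
      struct bus_sample trace[] = {
          { 0x8000, 0xA9, 1 }, { 0x8001, 0x42, 0 },                      /* LDA #$42 */
          { 0x8002, 0x8D, 1 }, { 0x8003, 0x00, 0 }, { 0x8004, 0x60, 0 }, /* STA $6000 */
          { 0x8005, 0x4C, 1 }, { 0x8006, 0x00, 0 }, { 0x8007, 0x80, 0 }, /* JMP $8000 */
      };
      for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++) {
          if (trace[i].sync)          /* opcode fetch cycle */
              printf("%04X: %02X  %s\n", trace[i].addr, trace[i].data,
                     mnemonic(trace[i].data));
      }
      return 0;
  }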

aa-jv 13 hours ago [-]
Nice one - the prototype sure reminds me of the early OpenPandora days ..
drkrab 1 days ago [-]
Way cool! When can I buy one?
xx__yy 20 hours ago [-]
Legend!!!
engineer_22 21 hours ago [-]
TIL the Atari Lynx was a handheld competitor to the Game Boy... it launched with a 65C02 processor.

https://en.wikipedia.org/wiki/Atari_Lynx

kayo_20211030 1 days ago [-]
Complete madness! But, I love it.
user3939382 1 days ago [-]
I love this! I’ve been working on a 6502 kernel. I have an arch trick to give the 6502 tons of memory so it can do a kind of Genera-like babashka lisp machine.
JPLeRouzic 6 hours ago [-]
> "I have an arch trick to give the 6502 tons of memory"

Please, what is your trick? Is it a variation on memory banking?

einpoklum 1 days ago [-]
And it mostly runs Microsoft software, too... Basic from 1977 :-P
Tor3 1 days ago [-]
It does not run Microsoft software at all, as far as I can tell. EhBASIC isn't Microsoft BASIC; EhBASIC was written by Lee Davison, and this particular version was further enhanced (see the GitHub repo). And Wozmon was obviously written by Woz, not Microsoft.
jdswain 1 days ago [-]
There has been some discussion around this, and Lee Davison is no longer with us, which makes it more difficult. It appears from the source code that Lee's independent BASIC is heavily based on Microsoft BASIC. I'm sure it's no longer an issue, especially as Microsoft has provided a free license for Microsoft 6502 BASIC, but the licensing situation is not entirely clear.
analog8374 1 days ago [-]
It's Commodore 64-ish. I like it.
Narishma 23 hours ago [-]
More like Commodore 46.
bananaboy 21 hours ago [-]
Wow! Now this is cool!