Ah, that explains this patchset that was submitted to the Linux kernel today
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on
s390 architecture, we aim to expand the platform's software ecosystem. This
initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU
virtualization on s390....."
I’ve been running VM/370 and MVS on my RPi cluster for a long time now.
kraftverk_ 11 hours ago [-]
Cool, can you share more about the setup?
rbanffy 7 hours ago [-]
A 4x RPi Zero Ws Docker Swarm cluster running the dockerised versions of Hercules with VM/370 Sixpack, VM/370 CE and MVS TK 4. All in an IKEA picture frame.
trebligdivad 1 days ago [-]
Oh that's a weird way to do it; they used to have an x86 add on block for mainframes which was just a pile of x86 blades with some integration.
bombcar 1 days ago [-]
I loved the era of "daughter cards" which were just entire computers on a board.
From the perspective of PC building, I've always thought it would be neat if the CPU/storage/RAM could go on a card with a PCIe edge connector, and then that could be plugged into a "motherboard" that's basically just a PCIe multiplexer out to however many peripheral cards you have.
Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or take it up again if you've got an AIO.
I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high end GPUs being only 137mm wide but up to 300mm in length.
yjftsjthsd-h 1 days ago [-]
Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons. If we could make it work, it would be really cool to have a system that lets you put in arbitrary processors, e.g. a box with 1 GPU and 2 CPU cards plugged in
mikepurvis 1 days ago [-]
I believe PCIe is a leader/follower system, so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know they shouldn't try to control the bus.
bombcar 1 days ago [-]
But if we're dreaming, we can have the backplane actually be multiple links (Nx Thunderbolt 5 cables connecting each slot to all other slots directly).
Then each device can be a host and a client at the same time, at full bandwidth.
jasomill 11 hours ago [-]
If every device is directly connected to every other one of n devices with Thunderbolt cables, each with its own dedicated set of PCIe lanes, you'd be limited to 1/n of the theoretical maximum bandwidth between any two devices.
What you really want is for every device to be connected through a massive PCIe switch that allows PCIe lanes to be connected arbitrarily, so, e.g., a pair of EPYCs could communicate over 96 lanes with 32 lanes free to connect to peripheral devices.
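The tradeoff between a full mesh and a switched fabric can be sketched with some back-of-the-envelope arithmetic (the lane counts and per-lane bandwidth below are illustrative assumptions, not specs of any particular system):

```python
# Compare per-pair bandwidth in a full mesh (each device's fixed lane
# budget is split across n-1 dedicated links) vs. an ideal non-blocking
# switch (one conversation can use the whole budget at once).

def mesh_pair_bandwidth(total_lanes: int, n_devices: int, gbps_per_lane: float) -> float:
    """GB/s between any two devices when each device's lanes are split
    evenly into dedicated point-to-point links, one per peer."""
    lanes_per_link = total_lanes // (n_devices - 1)
    return lanes_per_link * gbps_per_lane

def switched_pair_bandwidth(total_lanes: int, gbps_per_lane: float) -> float:
    """GB/s between two devices through an ideal switch: the full lane
    budget is available to a single pair when others are idle."""
    return total_lanes * gbps_per_lane

# Assumed: 128 lanes per device, 4 devices, ~4 GB/s per PCIe Gen5 lane.
print(mesh_pair_bandwidth(128, 4, 4.0))   # 168.0 GB/s per dedicated link
print(switched_pair_bandwidth(128, 4.0))  # 512.0 GB/s when only one pair talks
```

The mesh caps every pair at its dedicated slice even when the rest of the fabric is idle, which is the 1/n limit described above.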
SoftTalker 24 hours ago [-]
That's basically what S-100 systems were, isn't it (on a much slower bus)?
tssva 14 hours ago [-]
There were also PC compatible systems based around ISA backplanes. This was especially common for industrial computers, but Zenith/Heathkit made ISA backplane based systems for the business and consumer markets. I own a Zenith Z-160 luggable computer from 1984 which uses an 8 slot 8-bit ISA backplane. 1 slot is occupied by a CPU card which also has the keyboard connector. My system has 2 memory cards which each provide up to 320k along with a serial and parallel port. Zenith sold a desktop version of this as the Z-150. They later released models based upon 16-bit ISA backplanes. I think, but am not sure off the top of my head, that the last CPU they produced a 16-bit card for was the 486.
BirAdam 20 hours ago [-]
Yes, but also in many other scenarios. The last backplane systems I saw were 90s industrial 486s.
bombcar 1 days ago [-]
This was (is?) done - some strange industrial computers for sure and I think others, where the "motherboard" was just the first board on the backplane.
The Transputer B008 series was also somewhat similar.
throwup238 1 days ago [-]
That would crush latency on RAM.
mikepurvis 23 hours ago [-]
The RAM and CPU would still be on the same card together, and for the typical case of a single GPU it would just be 16x lanes direct from one to the other.
For cases where there are other cards, yes there would be more contention, but few expansion cards are able to saturate more than a lane or two. One lane of PCIe Gen5 is a whopping 4 GB/s in each direction, so that theoretically handles a dual 10GbE NIC on its own.
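The arithmetic behind that claim checks out roughly like this (idealized: only line encoding is accounted for, not packet/protocol overhead):

```python
# One PCIe Gen5 lane signals at 32 GT/s with 128b/130b encoding.
# Does its usable bandwidth cover a dual 10GbE NIC's payload rate?

def pcie_lane_gbytes(gt_per_s: float, enc_payload: int, enc_total: int) -> float:
    """Usable GB/s per lane per direction after line-encoding overhead."""
    return gt_per_s * enc_payload / enc_total / 8

gen5_lane = pcie_lane_gbytes(32.0, 128, 130)  # ~3.94 GB/s per direction
dual_10gbe = 2 * 10 / 8                       # 2.5 GB/s of Ethernet line rate

print(f"Gen5 x1: {gen5_lane:.2f} GB/s; dual 10GbE needs {dual_10gbe:.2f} GB/s")
```

So a single Gen5 lane has roughly 1.5x the headroom needed for two saturated 10GbE ports, at least on paper.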
Teever 1 days ago [-]
That's what I was hoping Apple was going to do with a refreshed Mac Pro.
I had envisioned a smaller tower design with PCIe slots, with Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB with a PCIe edge connector and power connector.
The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten year old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.
This kind of architecture seems to make sense to me in an age where computers have such long usable lifespans and where so many features are integrated into the motherboard.
raw_anon_1111 1 days ago [-]
You can do basically that by connecting over Thunderbolt 5
This is the kind of glorious thing that will only appear when Moore's law is dead and buried.
wat10000 1 days ago [-]
Now we have cables that include computers more powerful than an old mainframe. So if it pleases you, just think of all the tiny little daughter computers hooked up to your machine now.
But I wonder if this is "much better" than x86 emulation or virt?
Is there really SW that's limited to (Linux) ARM and not x86?
Jarwain 1 days ago [-]
Technically aren't most android apps limited to ARM?
toast0 1 days ago [-]
There's certainly some, but I don't think most.
I'd guess most apps are bytecode only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but MIPS and x86 are options. There were production x86 Androids for a few years, and some of those might still be in the user base; MIPS was taken out of the Native Development Kit in 2018, so it's probably not very relevant anymore.
wmf 1 days ago [-]
Probably Intel and AMD aren't willing to do this deal but Arm is.
kev009 24 hours ago [-]
IBM actually owns x86 rights still. They last used it to do something similar called Lx86 which ran x86 VMs on POWER CPUs.
wmf 23 hours ago [-]
Developing a good x86 CPU is far beyond IBM's abilities. The rights aren't enough.
kev009 23 hours ago [-]
Price competitive with AMD and Intel? Sure. Abilities? There is no magic; the Telum and Power11 are each as complicated as something like Epyc, and the former has both a longer and taller compatibility totem pole than x86.
Anyway, this post was never about building ARM or x86 CPUs; the point is they could have done a zArch fast path for x86 for "free", so there is some other strategy at play in doing it with ARM.
MikePlacid 1 days ago [-]
> Is there really SW that's limited to (Linux) ARM and not x86?
MacOS? (hides)
mykowebhn 1 days ago [-]
This is a serious question. What does IBM, in fact, do? I'm surprised they are still around and apparently relevant. Are they more or less a services and consulting company now?
roncesvalles 1 days ago [-]
Putting consumer grade (aka "commodity") hardware in a datacenter and running your infra on it is a bit of a meme, in the sense that it's not the only way of doing things. It was probably pioneered/popularized by Google but that's because writing great software was their "hammer", ie they framed every computing problem as a software problem. It was probably easier for them (= Jeff Dean) to take mediocre hardware and write a robust distributed system on top instead of the other way around.
There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple and easy to maintain single process backend program, run it on a mainframe and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without booting the OS. Credit card transactions and banking software run on this model for example (just think about how insanely reliable credit card transactions are).
IBM has a monopoly in the second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.
vbezhenar 1 days ago [-]
What I think today people do:
1. They run complicated infrastructure software, written by third-party developers.
2. And they run their own simple programs on top of them.
So for example you can rent a Kubernetes cluster from AWS and run a simple HTTP server. If your server crashes, Kubernetes will restart it, so it's resilient. There will be records in some metrics which will trigger some alerts, and eventually people will know about it and fix it.
Another example: your simple program does some REST GET query. The query fails for some reason. But the query was intercepted by a middleware proxy, and that proxy determines that the HTTP response was 5xx, so it can retry it. So it retries a few times with properly calibrated delays, eventually gets a response, and propagates it back to the simple program. The simple program had no idea about all the stuff happening to make it work; it just threw an HTTP query and got a response.
There's a lot of complicated machinery to enable simple programs to be part of resilient architecture. That's a goal, anyway.
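The retry-on-5xx behavior described above can be sketched in a few lines (a toy illustration with a hypothetical `fetch` callable; real deployments push this into a proxy or service mesh rather than the application):

```python
import random
import time

def retry_on_5xx(fetch, attempts: int = 3, base_delay: float = 0.1):
    """Call fetch() until it returns a non-5xx status, with jittered
    exponential backoff between attempts; give up after `attempts`."""
    for attempt in range(attempts):
        status, body = fetch()
        if status < 500:
            return status, body  # success, or a client error we won't retry
        if attempt < attempts - 1:
            # Exponential backoff with jitter avoids synchronized retry storms.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
    return status, body  # exhausted retries; propagate the last response

# Simulated backend that fails twice, then succeeds.
responses = iter([(503, ""), (502, ""), (200, "ok")])
print(retry_on_5xx(lambda: next(responses), base_delay=0.0))  # (200, 'ok')
```

From the calling program's point of view there was just one request and one response, which is exactly the transparency the comment describes.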
zozbot234 1 days ago [-]
> There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software.
You actually need both, the point of the extremely resilient hardware is that it can act as the single source of truth when you need it - including perhaps hosting some web-based transactions that directly affect your single source of truth. (Calling this a "model" for web-based infrastructure in general would be misleading though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can be ephemeral open systems, which is orders-of-magnitude cheaper.
throwaway27448 1 days ago [-]
> Credit card transactions and banking software run on this model for example
TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.
throwawaypath 1 days ago [-]
Current generation of banking software is expanding on the mainframe:
IBM Z mainframes play a pivotal role in facilitating 87% of global credit card transactions, nearly $8 trillion in annual payments, and 29 billion ATM transactions each year, amounting to nearly $5 billion per day. Rosamilia highlighted the continuous growth in demand for capacity over the past decade, which has seen inventory expand by 3.5 times.
That post fails to mention that Capital One's move from IBM mainframes to AWS was one of the reasons they suffered one of the largest data breaches in history.
Red Hat OpenShift (IBM) is what a lot of banks have settled on. Red Hat went all in maybe 5+ years ago in capturing those institutions.
VorpalWay 1 days ago [-]
Ah, that explains why IBM bought RedHat. Or at least one reason for doing so.
esseph 1 days ago [-]
I'd imagine close to 95% in the US: if they're running important workloads on prem on Linux, it's on RHEL. A staggering number of VMs and bare metal.
esseph 21 hours ago [-]
(Clarification: I'm not saying 95% of all US company Linux workloads are RHEL, not even close.
I'm saying a huge percentage of high criticality (risk of loss of life / high financial risk) are, simply because of support and the name.)
nineteen999 11 hours ago [-]
Exactly. The exact opposite of the people flogging internet widgets running on a bunch of AWS instances running Arch/Ubuntu/Cheap distro of the week. Unfortunately that contingent is massively over-represented here on HN.
Bigpet 1 days ago [-]
Is that in addition to mainframes or for completely replacing them?
zhengyi13 1 days ago [-]
Probably both, to respond to the risk tolerances of any given org.
esseph 1 days ago [-]
Both
Some stayed on prem, some pushed code to mainframe VMs in the cloud, some went to OpenShift (mostly on prem from what I've seen, probably 80-85%).
bitwize 1 days ago [-]
I work in banking. We provide modern solutions for small local banks in the US. That's how our core runs. It's just Java apps (Spring Boot, Jakarta EE) running in the cloud.
jhallenworld 23 hours ago [-]
How well do commodity systems protect your financial transactions from cosmic ray-induced bit errors?
stronglikedan 23 hours ago [-]
How often do you hear about them? Now divide that by the millions (billions?) of daily transactions to get an approximate error rate. That's about how well they are protected.
Nursie 1 days ago [-]
> Credit card transactions and banking software run on this model for example
Eh, they can but even a couple of decades ago there was a shift to open platforms. 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of “throw a ton of cheaper Unix at it” was winning.
Banks’ central systems maybe, I have less experience there. IBM did also try for a while to ride the Linux virtualisation wave as well, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.
mghackerlady 1 days ago [-]
x86 servers weren't that common in the 90s and early 2000s; that was all Sun or the other commercial Unix vendors' gear
greedo 1 days ago [-]
Sun was dying in 2000. I was busy deploying BSD and a bit later Linux for all our x86 gear.
pjmlp 1 days ago [-]
Meanwhile in 2000 we only considered Linux good enough to host our MP3 file server and quake for the late nights.
All our production stuff was being deployed on Aix, HP-UX, Solaris and Windows NT/2000 Server.
Likewise, most of my university degree used DG/UX and Solaris. When Red Hat Linux was first deployed in the labs, it was after the DG/UX server died, and I was already in the fourth year of a five year degree.
greedo 1 days ago [-]
Well we were a small startup, and the idea of using AIX was a non-starter. Solaris was lovely, but our E250 was only for mail, and in hindsight we should have stood up a FreeBSD server with dovecot or something instead of a system that we migrated off of a year later.
We did use NT/2K internally, but that was because we had some people who insisted on using SMB via Windows.
Such fun times. The *nix and *nix-like OSes were spreading like fire. I never would have thought I'd wrangle them for the majority of my career.
mghackerlady 1 days ago [-]
Java was exploding and sun machines were the server platform at the time. Yes, the dot com bubble burst and their stock was in freefall but all the things deployed to sun that survived the bubble didn't just disappear or move to X86 overnight
greedo 1 days ago [-]
Well you can say the same about COBOL...
Just because things hung around didn't mean that Sun/Solaris/Java were long for this world. Linux/x86 was just too cheap compared to SPARC gear. Even if it wasn't as robust as the Sun gear, it just made too much sense especially if you didn't have any legacy baggage.
Nursie 1 days ago [-]
In the 90s, perhaps not massively, but gaining ground very early in the 00s. I started my career in 2000 and most of the credit-card related stuff I built until ‘05 was targeted at Windows, Linux and Solaris, with a variety of other Unix platforms depending on the client/project.
But the x86 I was referring to in my comment above, Stratus, was (maybe still is?) an exotic attempt to enter the mainframe-reliability space with windows. IIRC it effectively ran two redundant x86 machines in lockstep, keeping them in sync somehow, so that if hardware on one died the other could continue. I have no idea how big their market was, but I know of at least one acquirer/issuer credit card system that ran on that hardware around 2002-3.
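The lockstep arrangement described above can be illustrated with a toy sketch (this is a conceptual illustration only, not how Stratus actually implements redundancy in hardware): run the same operation on two replicas and treat any divergence as a hardware fault.

```python
def lockstep(op, state_a, state_b):
    """Apply the same operation to both replica states; if the results
    ever differ, one replica's hardware is presumed faulty."""
    out_a = op(state_a)
    out_b = op(state_b)
    if out_a != out_b:
        raise RuntimeError("replica divergence: possible hardware fault")
    return out_a  # both agree, so either result is authoritative

# Both replicas hold the same account balance; apply the same credit.
balance_a = balance_b = 100
credit = lambda s: s + 25
print(lockstep(credit, balance_a, balance_b))  # 125
```

Real lockstep systems do this comparison per instruction or per bus cycle in silicon, which is what lets one machine die mid-transaction while the other carries on.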
nineteen999 11 hours ago [-]
Stratus VOS ran on a bunch of non-x86 hardware: i860, PA-RISC, 68000. It wasn't Windows (I was a UNIX admin with a modicum of Stratus VOS experience in production, back in the day).
Nursie 10 hours ago [-]
It seems I encountered the “ftServer” line, which on closer inspection launched in 2001, and was indeed intel/windows 2k, based around Pentium III Xeon Chips.
IIRC the Stratus/Model 88 was Moto 68K chips, not x86? I worked on them for years on wall st. - really nice machines! :-D
Nursie 17 hours ago [-]
The ones I encountered (and I never worked on them directly) were lockstep-paired x86 systems and ran Windows.
According to Wikipedia they launched in 2002, so I guess they were quite new when I saw them in 03.
Cthulhu_ 1 days ago [-]
A better question would probably be what they don't do; just going off the wiki page (https://en.wikipedia.org/wiki/IBM) for recent history, they're in health care (imaging), weather, video streaming, cloud services, Red Hat, managed infrastructure (which branched off into a company called Kyndryl, which has 90,000 employees in 115 countries), warfare (in June 2025, IBM was named by a UN expert report as one of several companies "central to Israel's surveillance apparatus and the ongoing Gaza destruction"), etc etc etc.
Basically they do a lot, but they're not showy about it.
pjmlp 1 days ago [-]
Own Red Hat, thus major contributions to Wayland, GNOME, GCC and Java, at the very least.
Have their own Java implementation, with capabilities like AOT before OpenJDK got started on Leyden or Graal even existed; for years it had extensions for value types (nowadays dropped), and, alongside Azul, a cluster-based JIT compiler that shares code across JVM instances.
IBM i and z/OS are still heavily deployed in many organisations, alongside Aix and LinuxONE (Linux running on mainframes and micros).
Research in quantum computing, AI, design processes, one of the companies that does huge amounts of patents per year across various fields.
And yes a services company, that is actually a consortium of IBM owned companies many of each under a different brand (which is followed by "an IBM company").
progmetaldev 15 hours ago [-]
I think a large number of people seem to forget the trust that companies have built on IBM over decades. The mainframe market is IBM, where IBM already had a hold. People want to believe that dropping such a large company could be done with a rewrite, but as long as IBM is there to support what they already have in place, it makes it unlikely for companies to move away. Obviously a team that has experience moving away from IBM technology to something more "modern" could go with another platform running on different hardware, but you don't hear about those migrations too much because they are rare (for a reason, IBM also offers support that companies love to cling on to).
I don't blame companies already tied up in IBM tech for sticking with what they have. As boring and dated as IBM tech might be, it's still running a ton of infrastructure, and you don't get to be that kind of company without being solid and reliable. That's what companies want, even if a development team wants to flex their skills in something new and not tied to IBM.
phrotoma 1 days ago [-]
Early in my career I spent some years working at the biggest bank in Canada, they were (and still are) an enormous IBM customer. Hardware, software, consulting, and probably lots of other things I had no visibility into.
Beneath the countless layers of VMs and copious weird purpose built gear like Tandem and Base24 for the ATMs was a whole bunch of true blue z/OS powered IBM mainframes chugging through thousands and thousands of interlocking COBOL programs that do everything from moving files between partner banks all over the world, moving money between accounts, compounding interest, and extracting a metric shitton of every type of fee imaginable.
If you know z/OS there's work available until your retirement. Miserable, pointless, banal, and archaic legacy as fuck mainframe work.
I don't know how exaggerated this story is, but one of my buddies did his internship at TD. One of his skip managers told him that if you know COBOL, there are departments that will give you a blank cheque during salary negotiation.
phrotoma 1 days ago [-]
Yeah it's hard to say but I believe there's at least some truth to that. I took COBOL off my resume over a decade ago just to combat the volume of recruiters trying to drag me away from the cloud back to on-prem land.
A good friend of mine who worked on a CICS based credit card processing application at that bank doubled his salary twice inside of 4 yrs. First by quitting the bank and going to a boutique consultancy to build competing software (which they sold to other banks), and then by quitting that job and coming back to the bank to take over the abysmal state the CICS app had lapsed into in his absence.
And that was circa 2010.
One thing that was true of the bank then and I'm sure is true now is that when they see a nail they truly have just the one hammer. When a problem comes along, hit it with a huge sack of cash until it goes away.
vbezhenar 1 days ago [-]
I don't think "know COBOL" is enough. I'm pretty sure I can learn COBOL in a week. It's more about "know COBOL and know all this old stuff like CLIs, etc, and know all these old approaches".
zozbot234 1 days ago [-]
Typically it's not just about knowing COBOL as a language, the bottleneck is having real expertise wrt. highly specific, fiddly proprietary frameworks that are implemented on top of COBOL.
nunez 1 days ago [-]
Not sure if this is still the case, but Dillard's (US retailer) had a COBOL training program for undergrads as recently as six years ago
3yr-i-frew-up 1 days ago [-]
Amazing to know AI has eliminated this role that used to have blank cheque.
chasd00 1 days ago [-]
> purpose built gear like Tandem
Tandem! Now there's a name i haven't heard in a long time. A college friend of mine worked with some of their stuff right out of college and I still remember him telling me about it. It seemed like magic, we were both floored with the capabilities.
/we were in our early 20s and the inet was just taking off, so there was lots of "magic" everywhere
The Remarkable Computers Built Not to Fail by Asianometry
functional_dev 1 days ago [-]
is it that bad?
maybe that is a secret for a long life. I want a job that never disappears :)
phrotoma 1 days ago [-]
Man ... this question hits me really hard. I was absolutely miserable by the end of my years at the bank, and the part that really fucked me up was that (at the time) I could not understand why all my colleagues weren't.
Huge generalizations incoming, there are exceptions to every rule, but in my experience there are no nerds who love tech for tech's sake in the banking world. It's entirely staffed by the "C's get degrees" crowd who just want to clock in, clock out, keep their head down, and retire with a nice pension.
I wanted to work on sexy technology, wrangle clouds, contribute to open source, and hack in modern languages.
I have many friends who are still at that bank 20 yrs later. They're all directors of this that or the other thing, still just grinding out some midlevel whatever career and cruising comfortably. If that ticks all your boxes then by all means go hit up a bank job.
By the time I left I couldn't drink enough liquor in a day to rinse the stench of that job off me. If I hadn't managed to slip that place I'd be dead of liver failure by now.
It's the secret for a long life for some folks, but it ain't for everybody.
progmetaldev 15 hours ago [-]
Fresh out of college, I had an interview for a job working with COBOL. There were classes being held to teach people development, as well as how to maintain existing COBOL software. It was between myself and another recruit, and that other recruit already had COBOL experience. Naturally, I was not chosen over someone who already had knowledge of the language being used.
Although I probably don't make nearly as much as that COBOL developer over 20 years later, I would be willing to bet that I am happier and haven't locked myself into a specific technology the way that developer probably has. Money is great, but if you actually care about what you do, I expect that being stuck on the same codebase for years isn't too satisfying (at least on code you didn't have a hand in creating from the very start). Too many people translate money into happiness, and I guess there is a balance there, but usually it's not possible to maintain happiness based off money when you do the same thing day in and day out.
nineteen999 11 hours ago [-]
You don't want to work for IBM unless your life is already over. Been there, done that, never again. It's depressing as hell. Your manager doesn't understand what you do, and they think that once your contract expires you'll be sitting around for weeks waiting for them to renew it.
Frieren 1 days ago [-]
IBM has more revenue than Oracle even if we hear way less about it. Five times smaller than Apple, though. It also has more employees than Microsoft or Alphabet, but tighter profit margins than other tech companies.
IBM is not in consumer products nor services so we do not hear about it.
gpapilion 1 days ago [-]
It’s a very different company post the PwC purchase. They have around 1/3 of the revenue from consulting which tends to push the valuation down due to its relative low margin when compared to software. This also inflates the number of employees.
lotsofpulp 1 days ago [-]
Oracle/TSMC/SpaceX isn’t in consumer products/services, but they are heard about.
IBM was declining for 10 years while the rest of the tech related businesses were blowing up, plus IBM does not pay well, so other than it being a business in decline, there wasn’t much to talk about. No one expects anything new from IBM.
Also, they had quite a few big boondoggles where they were the bad guys helping swindle taxpayers due to the goodwill from their brand’s legacy, so being a dying rent seeking business as opposed to a growing innovative business was the assumption I had.
freedomben 1 days ago [-]
SpaceX is pretty heavily in consumer products/services now that Starlink is big. But otherwise yes you are correct.
hsbauauvhabzb 1 days ago [-]
They also helped the nazis
bargainbin 1 days ago [-]
I work for a big international corp. We pay IBM a blanket sum annually because it's that hard to quantify just how much we rely on their services and licensing costs.
Licensing of course is just typical rent-seeking behaviour, but their services are valuable given the financial impact if one of their solutions goes down on us (which very rarely happens).
JoachimS 1 days ago [-]
Everything. They have for decades, and will for decades. And what IBM focuses on is probably worth looking into.
IBM (imho) is at the absolute frontline in quantum computers. One could argue whether the number of startups in QC means there is an actual market or not; those are companies that live on VC or the valuation of their stock.
But IBM is not showy, not on the front pages, does not live on VC or stock valuation. IBM makes tons of money decade after decade from customers that are also not showy but make tons of money. Banks, financial institutions, energy, logistics, health care etc etc. If IBM thinks these companies will benefit from using QC from IBM (and pay tons of money for it), there is quite probably some truth in QC becoming useful in the near future. Years rather than decades.
IBM have run the numbers and decided that the money possible to earn on QC services outweighs the engineering and research spending required: QCs powerful enough to run the QC-supported algorithms these companies need to make more tons of money. And it's probably not breaking RSA or ECC.
sophacles 24 hours ago [-]
> And it's probably not breaking RSA or ECC.
Evidence for this is in the number of articles that talk about simulated annealing/quantum annealing (or other optimization problems) w/r/t QC rather than crypto. Sure attention seeking headlines always focus on prime factoring, and the security aspect has a lot more enthusiast interest, but when you look past that into deeper stuff, a lot of the focus is on the optimization.
And many industries can dramatically benefit from better optimization - think about how many companies are at their core bin-packers or traveling salesmen.... off the top of my head anything in logistics, airlines, many aspects of the energy sector, and on and on.
The flash is in reading secrets, the money is in quantum annealing.
MILP 23 hours ago [-]
Maybe they want to add quantum to CPLEX!
jeswin 1 days ago [-]
They design their own CPUs, and they sold $15B of hardware last year. The Telum II in the z17 mainframe is a Samsung 5nm part.
What I don't get however is who'd use their custom accelerators for AI inference.
eru 1 days ago [-]
Anyone who can't get any better AI accelerators elsewhere? Last I heard, these things were sold out for years on end. And anyone who can make one, can sell them.
shrimppersimmon 1 days ago [-]
They design and build not one but two CPU architectures, s390/Z and POWER.
Both have been around for many years, but neither is obsolete, they're just not designed for consumer applications.
They still generate $10-15 billion per year in revenue.
eru 1 days ago [-]
Power was used in consumer applications a long time ago? I think Apple used them for a while, and so did some game consoles?
shrimppersimmon 1 days ago [-]
Yes. Apple used PowerPC, and PowerPC was also in the Xbox 360, PS3, Wii, and Wii U. It was also widespread in embedded sectors like networking, automotive, and aerospace.
IBM eventually stepped away from the embedded market and later lost their foothold in consoles as well. While Raptor did offer Power9 systems at a somewhat accessible price point, the IBM-produced CPUs were still fundamentally enterprise-grade hardware, meaning they retained the high costs and "big iron" features of server tech.
mghackerlady 1 days ago [-]
What I wouldn't give for raptor money... they've gotten more and more expensive as time went on
kstrauser 1 days ago [-]
Sort of, in the form of PowerPC, which was an Apple-IBM-Motorola (“AIM”) collaboration. It’s closely related to IBM’s Power line, but more like a predecessor than a sibling.
dcrazy 18 hours ago [-]
Wasn’t PowerPC cut down from POWER?
kitd 1 days ago [-]
They also designed the Cell CPU used in Nintendo Wiis, among others.
tempest_ 1 days ago [-]
Cell was the PS3; the Wii used a Power CPU.
IBM had a hand in both however
panick21_ 1 days ago [-]
They designed many, many more CPU architectures.
ghaff 1 days ago [-]
So they had $30 billion in software revenue last year and $15 billion in infrastructure against $20 billion in consulting.
guenthert 1 days ago [-]
You don't read much about IBM here, but this is the wrong site to look for them. A big chunk of IBM's business comes from other businesses outside the IT industry. You're more likely to read about IBM in the Wall Street Journal; Google finds "IBM" at wsj.com about 48000 times (it finds "oracle" there about 30000 times).
seanmcdirmid 1 days ago [-]
IBM is known as a toxic tech company, along with Palantir and Oracle. We talk about IBM on HN, but mostly in negative contexts.
kev009 23 hours ago [-]
With each passing CEO it seems to get more and more nebulous. But the mainframes are still technically interesting and they seem to be able to attract and retain quality CPU designers.
nineteen999 11 hours ago [-]
I'm surprised how often this question is asked by people who really have no clue and refuse to do even the most basic research on a company.
lmpdev 1 days ago [-]
I was surprised to find out they still have hardware repair technicians (extremely expensive but reliable: ~$400 per computer around 2022 iirc)
But yes they’re mostly enterprise/services/mainframes not anything overly consumer
quietsegfault 1 days ago [-]
No, IBM has Unisys contractors, not employees. All the techs I've worked with from IBM have been a nightmare. One dropped an entire drive array on the ground and tried to install it despite it being bent and no longer fitting in the rack. I have been acquired by IBM twice. They are a nightmare, a horrible company.
stonogo 1 days ago [-]
IBM has plenty of hardware techs. They're called system services representatives (SSRs) and if you got a Unisys contractor, that just means you're not spending enough money for IBM to send an SSR.
itake 1 days ago [-]
I own their shares due to their Quantum Computing group
When you’re that large and established it’s very hard to die. I expect IBM to exist in some form pretty much forever
enether 1 days ago [-]
They make $8-9B a year (~90% profit margins) selling software for mainframes, which were deployed ages ago but still have to be maintained because critical COBOL business code was written on their systems - and migration is too risky/costly.
To give you an idea:
- of the risk in regulated industries like banking: a UK bank was once fined *$62 million* for botching a mainframe migration and causing downtime.
- of the difficulty and risk in non-tech industries: Australia once spent *$120 million* trying to migrate its social security system off mainframes... and failed.
Mainframes are not their only business, of course, but it's a major cash cow that's underappreciated. I, for one, didn't know that business keeps growing.
My company uses AS/400 and DB2 and pays for their servers. So they still make money from hardware too.
esseph 1 days ago [-]
They own things like:
1. Red Hat Enterprise Linux, which is by far the most commonly deployed Linux variant among US Enterprise orgs.
2. Ansible
3. Podman
4. Hashicorp Terraform / Consul / Packer / Vagrant / Nomad / Etc.
5. Giant B2B services arm
6. Mainframe, which a lot of science organizations / governments / credit card companies still run. Sometimes you may have an IBM rep show up to replace a part on the mainframe you didn't even know was broken - very reliable, fault tolerant system.
7. The only service I know where you can rent Quantum computing time in the cloud
8. Probably a ton of other things I'm not even aware of.
9. Red Hat OpenShift - so if you're big enterprise running k8s on prem, there's a good chance it's OpenShift, especially in banking / finance / government.
quietsegfault 1 days ago [-]
They exist to swallow up profitable companies, extract any “unnecessary” overhead (like benefits, PTO, pay that isn’t rock bottom), and package into large enterprise licensing agreements.
eru 1 days ago [-]
Sounds like a pretty good deal for those people who keep starting these 'profitable' companies.
If IBM runs them into the ground, there's a niche for a copy-cat of the original company that you can just found again. Rinse and repeat.
p-e-w 1 days ago [-]
I was shocked when IBM acquired Red Hat a few years ago. I had silently assumed at the time that Red Hat was far bigger than IBM nowadays, so the reverse would have made more sense to me.
freedomben 1 days ago [-]
Google was apparently in the running for acquiring Red Hat. I still wonder what Red Hat would be today if Google had acquired it instead.
mghackerlady 1 days ago [-]
much, much worse
freedomben 1 days ago [-]
Yes I agree, given the direction G has been going. I was disappointed at the time, but it was probably a blessing in disguise
mghackerlady 1 days ago [-]
honestly I think it's a net positive (for me at least) because it ensures Fedora has great POWER support (I'll never be able to afford a POWER machine at this rate, but the architecture is an absolute pleasure to work with whenever I have to)
fock 1 days ago [-]
They sell (managed) database appliances (on z and Power) and associated software (think the platform/HANA parts of SAP) - all state-of-the-art in the late 1990s, but since then put into maintenance mode, and it shows (a bit like Oracle...).
Their hardware is still cool custom-built silicon and imo state of the art, but since k8s, high-speed networking, and multi-TB machines (for <$100k) are here and run Linux, no new venture buys into that anymore (except for gulf states...).
Before, when the competition was a cluster of Itanium/VMS or SPARC/Solaris and the associated contract, no one bought into that either at scale, but also no one using IBM had a very compelling reason to switch everything around.
So essentially they sell new hardware and "support" to customers who have needed to process tabular, multi-GB databases since when a PC had 128MB of memory, and who have been doing electronic record-keeping since the 1970s. They also allow their ~hostages~, ehm, customers who trust them with their data to run processing near the data at a cost, in a cloud-style billing model. That is so expensive, though, that every large IBM shop has built an elaborate layer of JVMs, Unix, and mirror databases around their IBM appliances. Lately they bought Red Hat and HashiCorp and Confluent, thus taking a cut from the "support" of the abominations of IT systems they helped birth for some more time to come (also, does anyone remember the alternative JVM, OpenJ9?).
I think the later a company started using centralized electronic record keeping, the higher the likelihood they are not paying IBM anymore: commercial banks, governments, and insurance started digitizing in the 60s (with custom software), and if the companies are old (or in US-friendly petrostates) they are all IBM customers. Corps using ERP or PLM offerings (so manufacturing and retail chains, which are younger than banks) started digitizing a little later (Walmart was only founded in the 60s, and electronic CAD started in the 80s), and while they likely used IBM in the past (SAP was big on DB2) they might not use it anymore (it also helps that they usually bought the ERP or PLM from someone else). New companies whose sole business was to run a digital platform started on Unix (see Amazon, who successfully fought to ditch even Oracle) or just built their whole platform themselves (Google). If those companies predate Unix, they usually fought hard to get rid of IBM (Microsoft, Amadeus).
Consulting/outsourcing services have been spun out to Kyndryl, so nowadays IBM only sells hardware, support for their products and ostensibly has some people left to develop their products... The days when that was a big thing and IBM produced all the stuff they sell support for now, have been long gone. A fun link to see how their "product development" operates nowadays is this discussion to bring gitlab-runners to z/OS: https://gitlab.com/gitlab-org/gitlab-runner/-/work_items/275... - tl;dr "hey you opensource company, we are IBM and managed to pay someone to port a go compiler to z/OS. Now we have a customer who wants to use gitlab with z/OS. Would you like to make your software part of our product offering?".
A fun fact is that - even within IBM - access to the real mainframe seems to be very limited, which shows a bit in the discussion linked above, and also in an ex-Kyndryl person saying: "oh, I once had a contract where we replaced the mainframe and we ran that on Linux boxes inside IBM, because it was just cheaper that way. Just the big reporting was a bit slow, but the reliability was just fine."
silvestrov 1 days ago [-]
> dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security
I think we can ignore the "AI" word here as its presence is only because everything currently has to be AI.
So why would IBM add ARM?
> As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments
I think it has become too expensive for IBM to develop their own CPU architecture and that ARM64 is starting to catch up in performance for a much lower price.
So IBM wants to switch to ARM without making too big a fuss about it.
adrian_b 1 days ago [-]
>So IBM wants to switch to ARM without making too big a fuss about it.
That was my first thought too, but it does not make sense, because if IBM sold ARM-based servers, nobody would buy from them instead of using cheaper alternatives.
As revealed in another comment, at least for now their strategy is to provide some add-in cards for their mainframe systems, containing an ARM CPU which is used to execute VMs in which ARM-native programs are executed.
So this is like decades ago, when if you had an Apple computer with a 6502 CPU you could also buy a Z80 CPU card for it, so you could also run CP/M programs on your Apple computer, not only programs written for Apple and 6502.
Thus with this ARM accelerator, you will be able to run on IBM mainframes, in VMs, also Linux-on-ARM instances or Windows-on-ARM instances. Presumably they have customers who desire this.
I assume that the IBM marketing arguments for this are that this not only saves the cost of an additional ARM-based server, but it also provides the reliability guarantees of IBM mainframes for the ARM-based applications.
Taking into account that today buying an extra server with its own memory may cost a few times more than last summer, an add-in CPU card that shares memory with your existing mainframe might be extra enticing.
acdha 1 days ago [-]
People buy IBM for the support and exotic features around high-availability and expansion. I think they’d be able to do an ARM migration if needed since they have deep experience with emulation (there is mainframe code from the 1970s running on POWER today on nested emulators) and they have a lot of precedent for their support engineers working closely with customers.
rzerowan 1 days ago [-]
I'm thinking maybe as a complement to x86 offerings, and an eventual displacement of them as a primary offering; I do not see them ditching POWER.
The architecture might be non-standard and not very widespread, but for what it does and the workloads suited to it, I don't think any ARM design comes close; maybe Fujitsu's A64FX.
silvestrov 1 days ago [-]
Marketingwise I think it is difficult for IBM to sell x86 systems as it is too easy for customers to compare performance to a standard Wintel server.
Sun had the same problem after 2001 dotcom when standard PC servers became reliable enough to run web servers on.
It's easier to sell "our special sauce" when building using a custom ARM platform. Then you have no easy comparison with standard servers.
The i systems are just POWER machines with different firmware.
tempay 1 days ago [-]
> ARM64 is starting to catch up in performance for a much lower price
Why do you say "starting to"? arm64 has been competitive with ppc64le for a fairly long time at this point
adrian_b 1 days ago [-]
I do not think that I have seen any public benchmark for more than a decade that can compare ARM-based CPUs with IBM POWER CPUs.
The recent generations of IBM POWER CPUs have not been designed for good single-thread performance but only for excellent multi-threaded performance.
So I believe that an ARM CPU from a flagship smartphone should be much faster in single thread than any existing IBM POWER CPU.
On the other hand, I do not know if there exists any ARM-based server CPU that can match the multi-threaded performance of the latest IBM POWER CPUs.
At least for some workloads the performance of the ARM-based CPUs must be much lower, as the IBM CPUs have huge cache memories and very fast memory and I/O interfaces.
The ARM-based server CPUs should win in performance per watt (due to using recent TSMC processes vs. older Samsung processes) and in performance per dollar, but not in absolute performance.
my123 1 days ago [-]
After Power9, IBM became uncompetitive in multi-core performance against mainstream server CPUs - both x86 and Arm. They didn't keep up with the rise in core counts.
And the single-thread side isn't that good either, but SMT8 is quite a nice software-licensing trick.
mbreese 1 days ago [-]
I thought PPC was supposed to be highly performant, but not very efficient. I didn’t think ARM (at least non-Apple ARM) was hitting that level of performance yet. I thought ARM was by far more efficient, but not quite there in terms of raw performance.
But I could be wrong… I’m going from a historical perspective. I haven’t checked PPC benchmarks in quite a while.
kjs3 1 days ago [-]
Are you guys sure you're not confusing product lines? PPC is a Power ISA architecture, but hasn't been pushing desktop/server-level performance for, what, almost 20 years? It's an embedded chip now, and AFAIK IBM doesn't even make them any more. Power (currently "10th gen"(-ish)) is the performant architecture, used in the computers formerly known as i-Series, formerly known as RS/6000. It's pretty fast, but not price competitive. They aren't really the same thing.
adrian_b 1 days ago [-]
"PowerPC" was a modification of the original IBM POWER ISA, which was made in cooperation by IBM, Motorola and Apple.
Motorola made CPUs with this ISA. Apple used CPUs with this ISA, some made by IBM and some made by Motorola.
While Motorola and Apple used the name "PowerPC", IBM continued to use the original name "POWER" for its server and workstation CPUs. Later IBM sold its division that made CPUs for embedded applications and for PCs, retaining only the server/workstation CPUs.
However, nowadays, even if the official IBM name is "POWER", calling it "PowerPC" is not a serious mistake, because all the "PowerPC" ISA changes have been incorporated many years ago into the POWER ISA.
So the current POWER ISA is an evolution of the PowerPC ISA, which was an evolution of the original 1990 POWER ISA.
It is better to call it POWER, as saying "PowerPC" may imply a reference to an older version of the ISA instead of the current one, but the two names refer to the same thing. PowerPC was an attempt at rebranding, but then they returned to the original name.
kjs3 1 days ago [-]
Thanks for the lecture. My point is that people often confuse PPC in the embedded space (still in production) with Power in the enterprise space (where no one I know refers to it as 'PPC' other than in historical artifacts like 'ppc64le' (we run mostly AIX), and hasn't since the G5 days). Same/similar ISA, very very different performance expectations. YMMV.
stonogo 1 days ago [-]
There isn't really an arm64 processor available that runs as fast as a Power10 processor, and there isn't really a Power10 processor that runs as efficiently as an arm64 processor, so 'competitive' is probably the wrong word.
homarp 1 days ago [-]
AI= Arm Ibm in that case
3form 1 days ago [-]
That's quite loaded already. They should consider calling it IBM ARM 64, IA-64 in short.
mghackerlady 1 days ago [-]
IBM was one of the few companies not buying the whole itanium nonsense iirc
wmf 1 days ago [-]
IBM wasted plenty of effort on Itanic but at least they were smart enough not to cancel any of their architectures.
formerly_proven 1 days ago [-]
IBM has two architectures which are de-facto only used by them, s390x and ppc64le. They have poured a lot of resources into having open source software support those targets, and this announcement might mean they find it easier/cheaper going forward to virtualize ARM instead and maybe even migrate slowly to ARM.
andrewf 24 hours ago [-]
AIX is still ppc64be. That and s390x are the only big-endian CPUs I can think of which aren't end-of-life, which I think is going to be an increasing maintenance burden over time for IBM alone.
mbreese 1 days ago [-]
I think they see customers wanting to have the flexibility to move to ARM and this is the fastest way to say they support ARM workloads. Maybe this is a path for IBM to eventually use ARM chips down the road, but I see this as being more about meeting customers where they think the demand is today rather than an explicit guess for tomorrow.
mghackerlady 1 days ago [-]
ppc64le has other machines. Raptor off the top of my head, but there's also that weird notebook project that seems to be talked about once every few years and probably won't ever happen and some pretty cool stuff in the amiga space (I don't know if that's strictly le but power is supposed to be bi-endian)
dadoum 19 hours ago [-]
The PCB design for a small desktop computer (which is a step toward the notebook project) was finished two weeks ago, and they are trying to get funding to actually manufacture a few prototypes rn [0]
ARM does not erase the compiler and toolchain tail IBM has dragged across two niche arches for years.
Legacy apps on s390x do not move just because IBM put out a press release, and IBM does not get fatter cloud margins by joining the same ARM pile as other vendors. Mainframe migration is not a weekend project. "Easier" usually means somebody signs a six-digit check first.
nxobject 1 days ago [-]
Once you parse the marketing speak, looks like there may be ARM ISA silicon in future System Z.
But, what are their legacy finance-sector customers asking for here? Are they trying to add ARM to LinuxONE, while maintaining the IBM hardware-based nine nines uptime strategy/sweet support contract paradigm?
If so, why don't the Visas of the world just buy 0xide, for example?
> develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.
> "This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage."
jlawer 1 days ago [-]
I wonder if we end up with z series running on arm long term.
The value in z series is in the system design and ecosystem; IBM could engineer an architecture migration to custom CPUs based on ARM cores. They would still be mainframe processors, but IBM would likely be able to reduce investment in silicon and supporting software.
themafia 1 days ago [-]
You can run 1960s System/360 binaries unmodified on modern z/OS. The system also uses a lot of "high level assembler" and "system provided assembly macros" making a complete architecture switch extremely painful and complicated.
They called their new architecture "ESAME" for a while for a pretty obvious reason.
kjs3 1 days ago [-]
I don't think that would change if the underlying architecture changes; IBM has been committed to backward compatibility for a long time. Some hypothetical future mainframe class IBM ARM would undoubtedly be able virtualize a 360/370/390 without breaking a sweat. And ARM will undoubtedly enable IBM to add custom emulation hardware to their spin on ARM if they need it.
bob1029 1 days ago [-]
I think the #1 use case here is allowing AI/cloud workloads the ability to execute against the mainframe's data without ever leaving the secure bubble. I.e., bring the applications to the data rather than the data to the applications.
IBM could put an entire 1k core ARM mini-cloud inside a Z series configuration and it could easily be missed upon visual inspection. Imagine being able to run banking apps with direct synchronous SQL access to core and callbacks for things like real-time fraud detection. Today, you'd have to do this with networked access into another machine or a partner's cloud which kills a lot of use cases.
If I were IBM, I would set up some kind of platform/framework/marketplace where B2B vendors publish ARM-based apps that can run on Z. Apple has already demonstrated that we can make this sort of thing work quite well with regard to security and how locked down everything can be.
iSnow 1 days ago [-]
It is wild how ARM - which was kind of a niche company and ISA - has taken the world by storm since the modern smartphone was born. Now their designs make their way upwards to big iron and AI datacenters.
kjs3 1 days ago [-]
It's what Intel did with x86 a few decades before the modern smart phone.
graemep 1 days ago [-]
Smartphones were a big boost, but they were already growing very rapidly before that.
chrsw 1 days ago [-]
Maybe I don't know enough technical details about these CPU architectures or IP agreements, but I don't see why IBM couldn't have done what Arm did but with PowerPC.
wmf 1 days ago [-]
PowerPC doesn't have the organic ecosystem that ARM has.
chrsw 4 hours ago [-]
Not now. But my question is, what was stopping IBM from doing what Arm did? We are where we are now and it's too late. But as far as I can see, there was nothing too special about Arm as compared to PowerPC back then, on a technical level.
JSR_FDED 11 hours ago [-]
If IBM wants the mainframe to also run AI workloads, why don’t they provide a GPU extension?
JSR_FDED 1 days ago [-]
IBM is desperate to keep the mainframe relevant. The typical transactional workloads are going to stay on the mainframe, and by bolting on ARM "for AI" they're giving their customer CIOs a reason to defend their decision to stick with the mainframe.
bonzini 1 days ago [-]
This certainly has been in the making for longer than the "everything we do must be for AI" bubble. In fact s390 has its own on-die inference engines and they have access to the same caching mechanisms as the main processor (which are quite insane).
mghackerlady 1 days ago [-]
IBM has been on the AI hypetrain since 2018ish iirc
3yr-i-frew-up 1 days ago [-]
2026 continues to amaze me.
I never would have expected such, but now I'm getting used to it.
I'm waiting for Apple and Microsoft to announce a collaboration. They probably already do collaborate, but Apple knows it's bad for marketing.
I'm not sure I can be surprised anymore.
dev_l1x_be 1 days ago [-]
I miss working on Power platforms. It is such a nice system with Open Firmware. The world went another way.
christkv 1 days ago [-]
ARM coprocessors for mainframes?
george_belsky 1 days ago [-]
Nvidia tried; it's IBM's turn now.
adolph 1 days ago [-]
I wonder how this relates to Linaro, a joint venture of ARM, IBM, and others started in 2010.
TLDR; “fine, we’ll support Arm too because customers want it.”
ghaff 1 days ago [-]
Is that such a silly notion?
jonkoops 1 days ago [-]
No, but it is a lot of corporate speak for such a simple announcement.
shevy-java 1 days ago [-]
Is that good or bad?
My gut feeling says to lean more to the bad side. I am very skeptical when corporations announce "this is for the win". Then I slowly walk over to the Google Graveyard and nod my head wisely in sadness ... https://killedbygoogle.com/
EdoardoIaga 1 days ago [-]
great
rbanffy 1 days ago [-]
AIX for ARM? ;-)
mghackerlady 1 days ago [-]
Is modern ARM stuff done big-endian? Because AIX is exclusively BE iirc.
yjftsjthsd-h 1 days ago [-]
That, weirdly, should be fine; ARM is bi-endian in the sense of being perfectly happy to run either way. In fact, the easiest way I know of to test software on a big-endian system is to run a perfectly ordinary Raspberry Pi with NetBSD's big-endian port for it :)
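To make the failure mode such a test box catches concrete, here's a tiny Python sketch (the values are just illustrative) of the byte-order assumption that only bites on a BE machine:

```python
import struct
import sys

# Explicit struct formats are portable: '>' is big-endian, '<' is little-endian.
value = 0x12345678
big = struct.pack(">I", value)     # b'\x12\x34\x56\x78' on any host
little = struct.pack("<I", value)  # b'\x78\x56\x34\x12' on any host
assert struct.unpack(">I", big)[0] == value
assert struct.unpack("<I", little)[0] == value

# The bug class: bytes written in one order, read back assuming the other,
# come back byte-swapped. Code using native order ('=') silently does this
# the moment its data crosses between LE and BE hosts.
swapped = struct.unpack(">I", struct.pack("<I", value))[0]
assert swapped == 0x78563412

print(sys.byteorder)  # 'little' on x86 and most ARM configs, 'big' on s390x
```

Code that sticks to explicit `>`/`<` formats passes on both; code that assumed the host order fails loudly on the BE Pi.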
mghackerlady 1 days ago [-]
Yeah, I know ARM is bi-endian (pretty much all non-x86 archs used nowadays are), but the question is whether there's actually enough of a software base for it. NetBSD having a BE ARM port is great, but most ARM stuff is done for LE systems, since macOS, NT, and most Linux stuff is LE. This isn't that much of a problem in the free software world, because we like to test things on obscure architectures, but the kind of proprietary stuff that you'd want to run on ARM might have problems (assuming it wasn't ported to AIX already).
rbanffy 1 days ago [-]
I never said it'd be an easy port, although there was an x86 (and s/390) port back when time itself was new.
edit: s/390 is big endian.
panick21_ 1 days ago [-]
IBM and 'track record of innovation' ... is a bit of an understatement.
nubinetwork 1 days ago [-]
April fools day was yesterday, IBM.
mafzal9 1 days ago [-]
Arm is trying to expand its horizons everywhere; just last year ARM acquired Arduino.
VorpalWay 1 days ago [-]
No, it was Qualcomm who acquired Arduino. While they are an ARM licensee who make ARM chips, they are not ARM.
woadwarrior01 1 days ago [-]
Also, Qualcomm and ARM aren't exactly on good terms.
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on s390 architecture, we aim to expand the platform's software ecosystem. This initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU virtualization on s390....."
https://patchwork.kernel.org/project/linux-arm-kernel/cover/...
things like https://www.youtube.com/watch?v=a6b4lYOI0GQ could get you a really interesting form of multitasking
Then each device can be a host and a client at the same time, at full bandwidth.
What you really want is for every device to be connected through a massive PCIe switch that allows PCIe lanes to be connected arbitrarily, so, e.g., a pair of EPYCs could communicate over 96 lanes with 32 lanes free to connect to peripheral devices.
The transputer b008 series was also somewhat similar.
For cases where there are other cards, yes there would more contention, but few expansion cards are able to saturate more than a lane or two. One lane of PCIe Gen5 is a whopping 4 GB/s in each direction, so that theoretically handles a dual 10gige NIC on its own.
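The arithmetic behind that claim checks out; here's a quick back-of-the-envelope sketch (assuming PCIe Gen5's 32 GT/s per lane and 128b/130b line encoding):

```python
# Rough check of the bandwidth claim above. Assumed figures: PCIe Gen5
# signals at 32 GT/s per lane with 128b/130b encoding overhead.
raw_transfers_per_s = 32e9
efficiency = 128 / 130               # 128b/130b line coding
lane_bytes_per_s = raw_transfers_per_s * efficiency / 8

# A dual-port 10GbE NIC at full line rate, both ports, one direction:
nic_bytes_per_s = 2 * 10e9 / 8

print(f"PCIe Gen5 x1: {lane_bytes_per_s / 1e9:.2f} GB/s per direction")
print(f"Dual 10GbE:   {nic_bytes_per_s / 1e9:.2f} GB/s")
assert lane_bytes_per_s > nic_bytes_per_s  # one lane covers the NIC
```

So a single Gen5 lane (~3.94 GB/s each way) indeed has headroom over 2.5 GB/s of dual 10GbE traffic, ignoring protocol overheads like TLP headers.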
I had envisioned a smaller tower design with PCIe slots, and Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB but with a PCIe edge connector and power connector.
The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.
This kind of architecture seems to make sense to me in an age where computers have such a longer usable lifespan and where so many features are integrated into the motherboard.
https://news.ycombinator.com/item?id=46248644
https://512pixels.net/2024/03/apple-jonathan-modular-concept...
Is there really SW that's limited to (Linux) ARM and not x86?
I'd guess most apps are bytecode-only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but MIPS and x86 are options. There were production x86 Androids for a few years, and some of those might still be in user bases; MIPS got taken out of the Native Development Kit in 2018, so it's probably not very relevant anymore.
Anyway, this post was never about building ARM or x86 CPUs; the point is they could have done a zArch fast path for x86 for "free", so there is some other strategy at play in doing it with ARM.
MacOS? (hides)
There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple and easy to maintain single process backend program, run it on a mainframe and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without booting the OS. Credit card transactions and banking software run on this model for example (just think about how insanely reliable credit card transactions are).
IBM has a monopoly in the second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.
1. They run complicated infrastructure software, written by third-party developers.
2. And they run their own simple programs on top of them.
So for example you can rent a Kubernetes cluster from AWS and run a simple HTTP server. If your server crashes, Kubernetes will restart it, so it's resilient. There will be records in some metrics which will light up some alerts, and eventually people will know about it and fix it.
Another example: your simple program does some REST GET query. The query fails for some reason. But it was intercepted by a middleware proxy, and that proxy determines the HTTP response was a 5xx, so it can retry. It retries a few times with properly calibrated delays, eventually gets a response, and propagates it back to the simple program. The simple program had no idea about all the stuff happening to make it work; it just threw an HTTP query and got a response.
There's a lot of complicated machinery to enable simple programs to be part of resilient architecture. That's a goal, anyway.
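As a minimal sketch of that retry-proxy idea (the `Response`/`with_retries` names here are made up for illustration, not any real proxy's API):

```python
import random
import time

# The "simple program" just makes a call; this wrapper transparently retries
# 5xx-style failures with exponential backoff plus jitter, the way a retry
# middleware would, so the caller never sees the transient errors.

class Response:
    def __init__(self, status, body=None):
        self.status = status
        self.body = body

def with_retries(call, attempts=4, base_delay=0.1):
    for attempt in range(attempts):
        resp = call()
        if resp.status < 500:          # success or client error: pass through
            return resp
        if attempt < attempts - 1:     # server error: back off, then retry
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
    return resp                        # retries exhausted: surface the 5xx

# Simulated flaky backend: fails twice, then succeeds.
calls = iter([Response(503), Response(502), Response(200, "ok")])
result = with_retries(lambda: next(calls), base_delay=0.001)
assert result.status == 200 and result.body == "ok"
```

Real middleware adds more care (retry budgets, only retrying idempotent methods, honoring Retry-After), but the shape is the same: the complexity lives outside the simple program.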
You actually need both, the point of the extremely resilient hardware is that it can act as the single source of truth when you need it - including perhaps hosting some web-based transactions that directly affect your single source of truth. (Calling this a "model" for web-based infrastructure in general would be misleading though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can be ephemeral open systems, which is orders-of-magnitude cheaper.
TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.
IBM Z mainframes play a pivotal role in facilitating 87% of global credit card transactions, nearly $8 trillion in annual payments, and 29 billion ATM transactions each year, amounting to nearly $5 billion per day. Rosamilia highlighted the continuous growth in demand for capacity over the past decade, which has seen inventory expand by 3.5 times.
https://thesiliconreview.com/2024/04/ibm-new-mainframe-web-t...
[0] https://www.security.org/identity-theft/breach/capital-one/
I'm saying a huge percentage of high-criticality (risk of loss of life / high financial risk) systems are, simply because of support and the name.
Some stayed on prem, some pushed code to mainframe VMs in the cloud, some went to OpenShift (mostly on prem from what I've seen, probably 80-85%).
Eh, they can, but even a couple of decades ago there was a shift to open platforms. In the 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of "throw a ton of cheaper Unix at it" was winning.
Banks’ central systems maybe, I have less experience there. IBM did also try for a while to ride the Linux virtualisation wave as well, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.
All our production stuff was being deployed on AIX, HP-UX, Solaris and Windows NT/2000 Server.
Likewise most of my university degree used DG/UX and Solaris; when Red Hat Linux was first deployed in the labs, it was after the DG/UX server died, and I was already in the fourth year of a five-year degree.
We did use NT/2K internally but that was because we had some who insisted on using SMB via Windows.
Such fun times. The Unix and Unix-like OSes were spreading like wildfire. I never would have thought I'd end up wrangling them for the majority of my career.
Just because things hung around didn't mean that Sun/Solaris/Java were long for this world. Linux/x86 was just too cheap compared to SPARC gear. Even if it wasn't as robust as the Sun gear, it just made too much sense especially if you didn't have any legacy baggage.
But the x86 I was referring to in my comment above, Stratus, was (maybe still is?) an exotic attempt to enter the mainframe-reliability space with windows. IIRC it effectively ran two redundant x86 machines in lockstep, keeping them in sync somehow, so that if hardware on one died the other could continue. I have no idea how big their market was, but I know of at least one acquirer/issuer credit card system that ran on that hardware around 2002-3.
They still list old product sheets here, the oldest being the ftServer 5200 AFAICT - https://www.stratus.com/solutions/previous-generation-produc...
https://www.stratus.com/assets/5200hw.pdf
According to Wikipedia they launched in 2002, so I guess they were quite new when I saw them in 03.
Basically they do a lot, but they're not showy about it.
They have their own Java implementation, with capabilities like AOT compilation years before OpenJDK started Project Leyden or Graal even existed; for years it had extensions for value types (since dropped), and, alongside Azul, a cluster-based JIT compiler that shares code across JVM instances.
IBM i and z/OS are still heavily deployed in many organisations, alongside AIX and LinuxONE (Linux running on mainframes and micros).
Research in quantum computing, AI, and design processes; it's one of the companies filing the largest number of patents per year across various fields.
And yes, a services company that is actually a consortium of IBM-owned companies, many of them under a different brand (followed by "an IBM company").
I don't blame companies that already tied up in IBM tech for sticking with what they already have. As boring and dated as IBM tech might be, it's still running a ton of infrastructure, and you don't get to be that kind of company without being solid and reliable. That's what companies want, even if a development team wants to flex their skills in something new and not tied to IBM.
Beneath the countless layers of VMs and copious weird purpose-built gear like Tandem and Base24 for the ATMs was a whole bunch of true-blue z/OS-powered IBM mainframes chugging through thousands and thousands of interlocking COBOL programs that do everything from moving files between partner banks all over the world and moving money between accounts to compounding interest and extracting a metric shitton of every type of fee imaginable.
If you know z/OS there's work available until your retirement. Miserable, pointless, banal, and archaic legacy as fuck mainframe work.
https://en.wikipedia.org/wiki/Tandem_Computers
https://en.wikipedia.org/wiki/BASE24
https://en.wikipedia.org/wiki/Z/OS
A good friend of mine who worked on a CICS-based credit card processing application at that bank doubled his salary twice inside of 4 yrs. First by quitting the bank and going to a boutique consultancy to build competing software (which they sold to other banks), and then by quitting that job and coming back to the bank to take over the abysmal state the CICS app had lapsed into in his absence.
And that was circa 2010.
One thing that was true of the bank then and I'm sure is true now is that when they see a nail they truly have just the one hammer. When a problem comes along, hit it with a huge sack of cash until it goes away.
Tandem! Now there's a name I haven't heard in a long time. A college friend of mine worked with some of their stuff right out of college and I still remember him telling me about it. It seemed like magic; we were both floored by the capabilities.
/we were in our early 20s and the inet was just taking off so there was lots of "magic" everywhere
https://www.youtube.com/watch?v=SSSB7ZTSXH4
The Remarkable Computers Built Not to Fail by Asianometry
Huge generalizations incoming, there are exceptions to every rule, but in my experience there are no nerds who love tech for tech's sake in the banking world. It's entirely staffed by the "C's get degrees" crowd who just want to clock in, clock out, keep their head down, and retire with a nice pension.
I wanted to work on sexy technology, wrangle clouds, contribute to open source, and hack in modern languages.
I have many friends who are still at that bank 20 yrs later. They're all directors of this that or the other thing, still just grinding out some midlevel whatever career and cruising comfortably. If that ticks all your boxes then by all means go hit up a bank job.
By the time I left I couldn't drink enough liquor in a day to rinse the stench of that job off me. If I hadn't managed to slip that place I'd be dead of liver failure by now.
It's the secret for a long life for some folks, but it ain't for everybody.
Although I probably don't make nearly as much as that COBOL developer over 20 years later, I would be willing to bet that I am happier and haven't locked myself into a specific technology the way that developer probably has. Money is great, but if you actually care about what you do, being stuck on the same codebase for years probably isn't too satisfying (at least on code you didn't have a hand in creating from the very start). Too many people translate money into happiness; there is a balance there, but it's usually not possible to sustain happiness on money alone when you do the same thing day in and day out.
IBM is not in consumer products nor services so we do not hear about it.
IBM was declining for 10 years while the rest of the tech related businesses were blowing up, plus IBM does not pay well, so other than it being a business in decline, there wasn’t much to talk about. No one expects anything new from IBM.
Also, they had quite a few big boondoggles where they were the bad guys helping swindle taxpayers due to the goodwill from their brand’s legacy, so being a dying rent seeking business as opposed to a growing innovative business was the assumption I had.
Licensing, of course, is just typical rent-seeking behaviour, but their services are valuable given the financial impact if one of their solutions goes down on us (which happens very rarely).
IBM (imho) is on the absolute front line in quantum computing. One could argue whether the number of startups in QC means there is an actual market or not, given they're companies that live on VC or the valuation of their stock.
But IBM is not showy, not on the front pages, and does not live on VC or stock valuation. IBM makes tons of money decade after decade from customers that are also not showy but make tons of money: banks, financial institutions, energy, logistics, health care, etc. If IBM thinks these companies will benefit from using QC from IBM (and pay tons of money for it), there is quite probably some truth in QC becoming useful in the near future. Years rather than decades.
IBM have run the numbers and decided that the money to be earned on QC services outweighs the engineering and research spending required: QCs powerful enough to run the QC-supported algorithms these companies need to make even more tons of money. And it's probably not breaking RSA or ECC.
Evidence for this is in the number of articles that talk about simulated annealing/quantum annealing (or other optimization problems) w/r/t QC rather than crypto. Sure attention seeking headlines always focus on prime factoring, and the security aspect has a lot more enthusiast interest, but when you look past that into deeper stuff, a lot of the focus is on the optimization.
And many industries can dramatically benefit from better optimization - think about how many companies are at their core bin-packers or traveling salesmen.... off the top of my head anything in logistics, airlines, many aspects of the energy sector, and on and on.
The flash is in reading secrets, the money is in quantum annealing.
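To make the bin-packing reference above concrete, here's a toy classical heuristic (first-fit decreasing) for that problem class. This is purely illustrative of the kind of optimization workload logistics companies care about; a quantum annealer would encode the problem very differently (typically as a QUBO), not run code like this:

```python
def first_fit_decreasing(items, capacity):
    """Pack item sizes into as few bins of the given capacity as possible.

    Classical greedy heuristic: sort items largest-first, place each into
    the first bin with room, open a new bin if none fits.
    """
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

# Example: six shipments into trucks of capacity 10
packed = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(len(packed))  # 2 bins: [8, 2] and [4, 4, 1, 1]
```

Even this simple heuristic is only an approximation; optimal bin packing is NP-hard, which is exactly why better optimization methods are worth real money to these industries.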
What I don't get however is who'd use their custom accelerators for AI inference.
Both have been around for many years, but neither is obsolete, they're just not designed for consumer applications.
They still generate $10-15 billion per year in revenue.
IBM eventually stepped away from the embedded market and eventually lost their foothold in consoles as well. While Raptor did offer Power9 systems at a somewhat accessible price point, the IBM-produced CPUs were still fundamentally enterprise-grade hardware, meaning they retained the high costs and "big iron" features of server tech.
IBM had a hand in both, however.
But yes, they're mostly enterprise/services/mainframes, not anything overly consumer.
You can see their roadmap here:
https://www.ibm.com/roadmaps/
To give you an idea:
- of the risk in regulated industries like banking: a UK bank was once fined *$62 million* for botching a mainframe migration and causing downtime.
- of the difficulty and risk in non-tech industries: Australia once spent *$120 million* trying to migrate its social security system off mainframes... and failed.
Mainframes are not their only business, of course, but it's a major cash cow that's under appreciated. I, for one, didn't know that business keeps growing.
Coincidentally, I wrote about the topic of mainframes with relation to IBM's acquisition of Confluent here today: https://blog.2minutestreaming.com/p/ibm-confluent-acquisitio...
1. Red Hat Enterprise Linux, which is by far the most commonly deployed Linux variant among US Enterprise orgs.
2. Ansible
3. Podman
4. Hashicorp Terraform / Consul / Packer / Vagrant / Nomad / Etc.
5. Giant B2B services arm
6. Mainframe, which a lot of science organizations / governments / credit card companies still run. Sometimes you may have an IBM rep show up to replace a part on the mainframe you didn't even know was broken - very reliable, fault tolerant system.
7. The only service I know where you can rent Quantum computing time in the cloud
8. Red Hat OpenShift - so if you're big enterprise running k8s on prem, there's a good chance it's OpenShift, especially in banking / finance / government.
9. Probably a ton of other things I'm not even aware of.
If IBM runs them into the ground, there's a niche for a copy-cat of the original company that you can just found again. Rinse and repeat.
So essentially they sell new hardware and "support" to customers who have needed to process tabular, multi-GB databases since when a PC had 128MB of memory, and who have been doing electronic record-keeping since the 1970s. They also allow their ~hostages~, ehm, customers who trust them with their data to run processing near the data, at a cost / in a cloud-style billing model. That is so expensive, though, that every large IBM shop has built an elaborate layer of JVMs, Unix and mirror databases around their IBM appliances. Lately they bought Red Hat and HashiCorp and Confluent, thus taking a cut from the "support" of the abominations of IT systems they helped birth for some more time to come (also, does anyone remember the alternative JVM, OpenJ9?).
I think the later a company started using centralized electronic record keeping, the higher the likelihood they are not paying IBM anymore: commercial banks, governments and insurance started digitizing in the 60s (with custom software), and if the companies are old (or in US-friendly petrostates) they are all IBM customers. Corps using ERP or PLM offerings (so manufacturing and retail chains, which are younger than banks) started digitizing a little later (Walmart was only founded in the 60s and electronic CAD started in the 80s), and while they likely used IBM in the past (SAP was big on DB2) they might not use it anymore (it also helps that they usually bought the ERP or PLM from someone else). New companies whose sole business was to run a digital platform started on Unix (see Amazon, who even successfully fought to ditch Oracle) or just built their whole platform themselves (Google). If those companies predate Unix they usually fought hard to get rid of IBM (Microsoft, Amadeus).
Consulting/outsourcing services have been spun out to Kyndryl, so nowadays IBM only sells hardware, support for their products and ostensibly has some people left to develop their products... The days when that was a big thing and IBM produced all the stuff they sell support for now, have been long gone. A fun link to see how their "product development" operates nowadays is this discussion to bring gitlab-runners to z/OS: https://gitlab.com/gitlab-org/gitlab-runner/-/work_items/275... - tl;dr "hey you opensource company, we are IBM and managed to pay someone to port a go compiler to z/OS. Now we have a customer who wants to use gitlab with z/OS. Would you like to make your software part of our product offering?". A fun fact is that - even within IBM - access to the real mainframe seems to be very limited which shows a bit in the discussion linked above and also with an ex-Kyndryl-person saying: "oh, I once had a contract where we replaced the mainframe and we ran that on Linux-boxes inside IBM, because it was just cheaper that way. Just the big reporting was a bit slow, but the reliability was just fine"
I think we can ignore the "AI" word here as its presence is only because everything currently has to be AI.
So why would IBM add ARM?
> As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments
I think it has become too expensive for IBM to develop their own CPU architecture and that ARM64 is starting to catch up in performance for a much lower price.
So IBM wants to switch to ARM without making too big a fuss about it.
That was my first thought too, but it does not make sense, because if IBM sold ARM-based servers nobody would buy from them instead of using cheaper alternatives.
As revealed in another comment, at least for now their strategy is to provide some add-in cards for their mainframe systems, containing an ARM CPU which is used to execute VMs in which ARM-native programs are executed.
So this is like decades ago, when if you had an Apple computer with a 6502 CPU you could also buy a Z80 CPU card for it, so you could also run CP/M programs on your Apple computer, not only programs written for Apple and 6502.
Thus with this ARM accelerator, you will be able to run on IBM mainframes, in VMs, also Linux-on-ARM instances or Windows-on-ARM instances. Presumably they have customers who desire this.
I assume that the IBM marketing arguments for this are that this not only saves the cost of an additional ARM-based server, but it also provides the reliability guarantees of IBM mainframes for the ARM-based applications.
Taking into account that today buying an extra server with its own memory may cost a few times more than last summer, an add-in CPU card that shares memory with your existing mainframe might be extra enticing.
The architecture might be non-standard and not very widespread, but for what it does and the workloads suited to it, I don't think any ARM design comes close, except maybe Fujitsu's A64FX.
Sun had the same problem after 2001 dotcom when standard PC servers became reliable enough to run web servers on.
It's easier to sell "our special sauce" when building using a custom ARM platform. Then you have no easy comparison with standard servers.
They will probably market the ARM inclusion similarly - as something that the package provides.
As far as POWER goes, I think only Raptor[1] does direct marketing of the power (hehe) and capabilities.
[1]https://www.raptorcs.com/
https://www.ibm.com/products/power
The i systems are just POWER machines with different firmware.
Why do you say "starting to"? arm64 has been competitive with ppc64le for a fairly long time at this point
The recent generations of IBM POWER CPUs have not been designed for good single-thread performance but only for excellent multi-threaded performance.
So I believe that an ARM CPU from a flagship smartphone should be much faster in single thread than any existing IBM POWER CPU.
On the other hand, I do not know if there exists any ARM-based server CPU that can match the multi-threaded performance of the latest IBM POWER CPUs.
At least for some workloads the performance of the ARM-based CPUs must be much lower, as the IBM CPUs have huge cache memories and very fast memory and I/O interfaces.
The ARM-based server CPUs should win in performance per watt (due to using recent TSMC processes vs. older Samsung processes) and in performance per dollar, but not in absolute performance.
And the single-thread side isn't that good either, but SMT8 is quite a nice software-licensing trick.
But I could be wrong… I’m going from a historical perspective. I haven’t checked PPC benchmarks in quite a while.
Motorola made CPUs with this ISA. Apple used CPUs with this ISA, some made by IBM and some made by Motorola.
While Motorola and Apple used the name "PowerPC", IBM continued to use the original name "POWER" for its server and workstation CPUs. Later IBM sold its division that made CPUs for embedded applications and for PCs, retaining only the server/workstation CPUs.
However, nowadays, even if the official IBM name is "POWER", calling it "PowerPC" is not a serious mistake, because all the "PowerPC" ISA changes have been incorporated many years ago into the POWER ISA.
So the current POWER ISA is an evolution of the PowerPC ISA, which was an evolution of the original 1990 POWER ISA.
It is better to call it POWER, as saying "PowerPC" may imply a reference to an older version of the ISA rather than the current one, but the two names refer to the same thing. PowerPC was an attempt at rebranding, but then they returned to the original name.
[0]: https://www.powerpc-notebook.org/2026/03/we-are-ready-for-pr...
Legacy apps on s390x do not move because IBM put out a press release and IBM does not get fatter cloud margins by joining the same ARM pile as other vendors. Mainframe migration is not a weekend project. "Easier" usually means somebody signs a six digit check first.
But, what are their legacy finance-sector customers asking for here? Are they trying to add ARM to LinuxONE, while maintaining the IBM hardware-based nine nines uptime strategy/sweet support contract paradigm?
If so, why don't the Visas of the world just buy 0xide, for example?
> develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.
> "This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage."
The value in the z series is in the system design and ecosystem; IBM could engineer an architecture migration to custom CPUs based on ARM cores. They would still be mainframe processors, but IBM would likely be able to reduce investment in silicon and supporting software.
They called their new architecture "ESAME" for a while for a pretty obvious reason.
IBM could put an entire 1k core ARM mini-cloud inside a Z series configuration and it could easily be missed upon visual inspection. Imagine being able to run banking apps with direct synchronous SQL access to core and callbacks for things like real-time fraud detection. Today, you'd have to do this with networked access into another machine or a partner's cloud which kills a lot of use cases.
If I were IBM, I would set up some kind of platform/framework/marketplace where B2B vendors publish ARM-based apps that can run on Z. Apple has already demonstrated that we can make this sort of thing work quite well with regard to security and how locked down everything can be.
I never would have expected such, but now I'm getting used to it.
I'm waiting for Apple and Microsoft to announce a collaboration. They probably already collaborate, but Apple knows it's bad for marketing.
I'm not sure I can be surprised anymore.
https://en.wikipedia.org/wiki/Linaro
My gut feeling says to lean more toward the bad side. I am very skeptical when corporations announce "this is for the win". Then I slowly walk over to the Google Graveyard and nod my head wisely in sadness ... https://killedbygoogle.com/
edit: s/390 is big endian.
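On that endianness point, a minimal Python illustration of why it matters when moving binary data between architectures (s390x is big-endian; x86_64 and the usual arm64 Linux configurations are little-endian):

```python
import struct

value = 0x12345678

# Big-endian ("network order", what s390x uses): most significant byte first
big = struct.pack(">I", value)
# Little-endian (x86_64, and typical arm64 Linux): least significant byte first
little = struct.pack("<I", value)

print(big.hex())     # 12345678
print(little.hex())  # 78563412
```

The same 32-bit integer serializes to byte sequences in opposite orders, which is why raw memory dumps, packed records and wire formats from a mainframe can't just be memcpy'd onto a little-endian box.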
https://www.qualcomm.com/news/releases/2025/09/qualcomm-achi...