> Apple Silicon changes the physics. The CPU and GPU share the same physical memory (Apple's Unified Memory Architecture) ... no bus!
Beware the reality distortion field: This is of course how it's worked on most x86 machines for a long time. And also on most Macs when they were using Intel chips.
agambrahma 2 hours ago [-]
Agree, maybe "changes the physics" was too strong; shared CPU/GPU memory is not new.
What is different then is the combination of
1. UMA memory (and yes, iGPU had this, pre-M1)
2. enough bandwidth / GPU throughput for local inference
3. straightforward `makeBuffer(bytesNoCopy:)` path
So, the novelty isn't the shared memory itself, but the whole chain lining up to make the Wasm linear memory -> Metal-buffer approach practical + performant enough.
(and not saying there's some Apple Silicon magic here either ... it'd work anywhere there was UMA and no-copy host-pointer path)
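For the curious, here's roughly what that chain looks like -- a minimal sketch, assuming Swift on macOS; the posix_memalign allocation stands in for the Wasm linear memory region, and this is not the article's actual code:

    import Metal
    import Darwin

    // Stand-in for the Wasm linear memory region a runtime would hand over;
    // makeBuffer(bytesNoCopy:) requires a page-aligned pointer and length.
    let device = MTLCreateSystemDefaultDevice()!
    let pageSize = Int(getpagesize())
    let length = 256 * pageSize
    var raw: UnsafeMutableRawPointer?
    posix_memalign(&raw, pageSize, length)

    // On UMA the GPU reads the same physical pages the CPU writes:
    // no staging buffer, no blit.
    let shared = device.makeBuffer(
        bytesNoCopy: raw!,
        length: length,
        options: .storageModeShared,
        deallocator: { ptr, _ in free(ptr) })

The practical caveat is that the pointer has to stay valid (and page-aligned) for the buffer's lifetime, so a linear memory that can move when it grows needs care.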
littlecranky67 18 hours ago [-]
Why did all my x86 onboard iGPUs reserve a fixed amount of RAM at boot, inaccessible to the OS? Why do dGPUs bring their own VRAM, and how would you directly manipulate it from the CPU without copying?
ben-schaaf 15 hours ago [-]
Correct me if I'm wrong, but that reserved memory is for the framebuffer? The iBoot bootloader also reserves some memory for the framebuffer.
dGPUs bring their own VRAM because it's a different type of memory, allowing them to get higher performance than they could with DDR. The M4 Max requires 128GB of LPDDR5X to reach its ~500GB/s bandwidth. The RX Vega 64 had that same bandwidth in 2017 with just 8GB of HBM2.
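(Back-of-envelope on those figures, assuming the commonly reported bus widths; transfer rate times bus width gives peak bandwidth:)

    // MT/s x bytes-per-transfer = bytes/s
    let m4Max  = 8533e6 * (512.0 / 8)   // ~546 GB/s: LPDDR5X-8533 on a 512-bit bus
    let vega64 = 1890e6 * (2048.0 / 8)  // ~484 GB/s: HBM2 at 945 MHz DDR on a 2048-bit bus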
fc417fc802 15 hours ago [-]
Nope, the reserved memory is what's available to use from the various APIs (VK, GL, etc). More recently there's OS support for flexible on demand allocation by the GPU driver.
Of course the APIs have allowed you to make direct use of pointers to CPU memory for something like a decade. However that requires maintaining two separate code paths because doing so while running on a dGPU is _extremely_ expensive.
kimixa 7 hours ago [-]
As someone who's worked on GPU drivers for shared memory systems for over 15 years, supporting hardware that was put on the market over 20 years ago: in my experience they've "always" been able to dynamically assign memory pages to the GPU.
The "reserved" memory is more about the guaranteed minimum to allow the thing to actually light up, and sometimes specific hardware blocks had more limited requirements (e.g. the display block might require contiguous physical addresses, or the MMU data/page tables themselves) so we would reserve a chunk to ensure they can actually be allocated with those requirements. But they tended to be a small proportion of the total "GPU Memory used".
Sure, sharing the virtual address space is less well supported, but the total amount of memory the GPU can use is flexible at runtime.
fulafel 18 hours ago [-]
To the first question: blame Windows I guess. But even on older chips, GPU code could access memory allocated on the CPU side so this didn't cap the amount of data your GPGPU code could crunch.
littlecranky67 7 hours ago [-]
I remember this was mostly a BIOS setting for how much memory to allocate to the iGPU - and once set in the BIOS, that memory was not accessible to the OS (besides GPU I/O).
fulafel 6 hours ago [-]
Yes, but this was to appease Windows, probably older versions and/or 32 bit versions of it.
saagarjha 23 hours ago [-]
I'm curious what this offers over just building the host side code to be native?
agambrahma 1 hours ago [-]
Yes: for local inference alone, not much -- native is the obvious choice.
The value would be in actor processes, where you can delegate inference without paying the 'copy tax' for crossing the sandbox boundary.
So, less "inference engine" and more "Tmux for AI agents"
Think pausing, moving, resuming, swapping model backend.
I scoped the post to memory architecture, since it was the least obvious part ... will follow up with one about the actor model aspect.
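(A purely hypothetical sketch of what that surface could look like; every name here is invented for illustration and is not the project's actual API:)

    import Foundation

    // Hypothetical shape of a pausable, movable inference actor.
    struct Snapshot { let linearMemory: Data; let actorState: Data }
    enum ModelBackend { case metalLocal, cpuFallback }

    protocol InferenceActor {
        func pause() async throws -> Snapshot              // capture Wasm memory + state
        func resume(from snapshot: Snapshot) async throws  // possibly on another machine
        func swap(to backend: ModelBackend) async throws   // keep state, change engine
    }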
saagarjha 1 hours ago [-]
I'm a little confused what an actor process is. To me a process is inherently local?
jsomedon 21 hours ago [-]
My quick guess is that this approach offers near-zero overhead for the GPU to access data inside the sandbox, with all the security/privacy benefits of the sandbox.
swiftcoder 16 hours ago [-]
For one thing, it's a lot easier to distribute a webpage than a native app
saagarjha 15 hours ago [-]
This doesn't work with webpages though
swiftcoder 10 hours ago [-]
I somehow missed that tidbit
nl 19 hours ago [-]
I'm pretty sure this is just "yes (parts of), memory control in WASM works"[1].
The whole Apple Silicon thing is (in this case) just added details that don't actually matter.
[1] https://github.com/WebAssembly/memory-control/blob/main/prop...
Apple Silicon uses unified memory where the CPU and GPU use the exact same memory and no copies from RAM to VRAM are needed. The article opens by mentioning just that, and indeed it is the whole point of the article.
fho 15 hours ago [-]
I am always a bit baffled why Apple gets credited with this. Unified memory has been a thing for decades. I can still load the biggest models on my 10th gen Intel Core CPU and the integrated GPU can run inference.
The difference being that modern integrated GPU are just that much faster and can run inference at tolerable speeds.
(Plus NPUs being a thing now, but that also started much earlier. The 10th gen Intel Core architecture already had instructions to deal with "AI" workloads... just very preliminary)
mirekrusin 15 hours ago [-]
That's shared, not unified: it's partitioned, with the CPU and GPU copies managed by the driver. Lunar Lake (2024) is getting closer but still not as tightly integrated as Apple's, and capped at 32GB (Apple goes up to 512GB). AMD Ryzen AI Max is closer to Apple, but its memory is still about 3x slower.
fc417fc802 14 hours ago [-]
Shared vs unified is merely a driver implementation detail. Regardless, in practice (IIUC) data is still going to be copied if you perform a transfer using a graphics API because the driver has no way of knowing what the host might do with the pointed-to memory after the transfer.
If you make use of host pointers and run on an iGPU no copy will take place.
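To make the two paths concrete in Metal terms (a sketch with illustrative names, not anyone's production code):

    import Metal

    let device = MTLCreateSystemDefaultDevice()!
    var input = [Float](repeating: 0, count: 1 << 20)

    // Portable path: the driver copies `input` into its own allocation.
    // Behaves the same against a dGPU, but you pay the copy even on UMA.
    let copied = device.makeBuffer(
        bytes: &input,
        length: input.count * MemoryLayout<Float>.stride,
        options: .storageModeShared)

The host-pointer path (makeBuffer(bytesNoCopy:), sketched upthread) skips that copy, but only pays off on shared-memory hardware, which is exactly why code that must also run on dGPUs ends up with two branches.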
fho 14 hours ago [-]
My last serious GPU programming was with OpenCL. And if my memory does not fail me, the API was quite specific about copying and/or sharing memory on a shared memory system.
I am pretty sure that my old 10th gen CPU/GPU combo has the ability to use the "unified"/zero-copy access mode for the GPU.
eis 11 hours ago [-]
I don't think people are crediting Apple with inventing unified memory - I certainly did not. There have been similar systems for decades. What Apple did is popularize this with widely available hardware with GPUs that don't totally suck for inference, in combination with RAM that has decent speed at an affordable price. You either had iGPUs which were slow (plus not exactly the fastest DDR memory) but at least sitting on the same die, or you had fast dGPUs which had their own limited amount of VRAM. So the choice was between direct memory access but not powerful, or powerful but strangled by having to go through the PCIe subsystem to access RAM.
The article is talking about one particular optimization that one can implement with Apple Silicon and I at least wasn't aware that it is now possible to do so from WebAssembly - so to completely dismiss it as if it had nothing to do with Apple Silicon is imho not fair.
pjmlp 11 hours ago [-]
Back in the 8 and 16 bit home computer days, or game consoles for that matter, it was popular enough already.
And yes things like the Amiga Blitter, arcade or console graphics units were already baby GPUs.
nl 3 hours ago [-]
It's irrelevant because, unified system memory or not, the point of the article is whether WASM adds an extra memory copy.
That's the same no matter the physical memory system architecture.
itamos 16 hours ago [-]
On one hand it sounds promising to exploit shared-memory properties to speed up inference. On the other hand, the well-established inference engines are perhaps already well optimized to overlap compute and communication efficiently, in which case host-device copies are likely not a problem worth tackling.
trueno 1 days ago [-]
> on Apple Silicon, a WebAssembly module's linear memory can be shared directly with the GPU: no copies, no serialization, no intermediate buffers
enhance
> no copies, no serialization, no intermediate buffers
would it kill people to write their own stuff? why are we doing this? out of all the things people immediately cede to AI, they cede their human ability to communicate and convey/share ideas. this timeline is bonkers.
Aurornis 23 hours ago [-]
I’ve become overly sensitive to it as well because it’s such a reliable indicator that there are other problems in the work.
I've wasted so much time looking at interesting repos this year before discovering that one of the main claims was a hallucination, or that when I got to the specific part of the codebase it just had a big note from the LLM that it's a placeholder until it can figure out how to do the requested thing.
The people who have AI write their articles don’t care if it works or if it’s correct. They’re trying to get jobs and want something quick and interesting that will appeal to a lazy hiring manager. We’re just taking the bait too.
trueno 20 hours ago [-]
> The people who have AI write their articles don’t care if it works or if it’s correct.
I'd build on this: The people who have AI write their articles very likely don't know how their thing works or whether it's correct. High chance they'll stumble when they are expected to speak about whatever it is they are presenting with some authority and demonstration of knowledge. Human to human, not being able to do that obliterates trust. It places the work somewhere near the realm of misinformation, which basically nobody has any interest in consuming.
Good luck to people who want to fluff expertise and present as more-capable for job prospects, the world is shit and I know there's more people who need income than there are jobs that provide for our basic human needs, but this level of AI crutching is just going to bode poorly for those who think this is going to get them where they need to go.
rvz 1 days ago [-]
This sort of obvious pattern is an instant AI dead give-away that I keep on seeing in hundreds of blogs and code posted on this site:
"Here is X - it makes Y"
"That's not X, it's Y."
"...no this, no that, no X, no Y."
Another tell in code: deduce the author's experience, and ask whether they somehow became an expert in a different language since... yesterday.
There will be a time where it will be problematic for those who over-rely on AI and will struggle in on-site interviews with whiteboard tests.
bensyverson 24 hours ago [-]
I think the days of on-site interviews with whiteboard tests may be drawing to a close faster than you suspect
JSR_FDED 23 hours ago [-]
Huh, I’m 100% going to interview this way the next time I have to hire an engineer. I can’t think of a better way to get a sense of how a candidate reasons about things, and of their values - do they have a sense of responsibility, conscientiousness, team fit.
Everything else that could be LLM-mediated carries no signal anymore.
andsoitis 22 hours ago [-]
> I can’t think of a better way to get a sense of how a candidate reasons about things
Some ideas to help you: ask the candidate something underspecified and watch what they do first. Do they ask clarifying questions, make their assumptions explicit? After they answer ask what would change their mind, where does that break down? Pick a topic they know and ask them to explain it to a smart non-engineer. Make them estimate something they can’t look up (forces them to decompose, bound, and calibrate). Once they’ve proposed a solution to a question, change the constraints to see if they can adapt or whether they’re stuck.
What you want to evaluate is dynamic reasoning, adaptability.
z0r 22 hours ago [-]
Is this implying that you don't believe people will hire programmers anymore?
jhayward 13 hours ago [-]
Well, there is a long tradition of "testing" developer candidates by asking them to exhibit skills in tasks that they never, ever, do in their work. Like whiteboard coding.
It doesn't have a great success record.
I personally would rather they exhibited expert skills in using tools, and expressing their design insight as a part of that skillset.
m00dy 24 hours ago [-]
I also think we will never go back to the good old days.
dylan604 21 hours ago [-]
It'll put the "everything old becomes new again" idea to the test.
notepad0x90 22 hours ago [-]
I don't know, to me your sentiment sounds a lot like how back in the day they used to say "you can't just use a calculator all the time, use your brain and show the work on pen and paper".
humans have been using tools to communicate since pre-history. language itself is one tool of communication invented to supersede body-language and grunting and noises. the thought and idea is theirs, it was communicated. Would it be that much different if they used a spellchecker extensively to edit their work?
I get why you're annoyed but is it really such a big deal? random people aren't to blame for whatever other annoyances "AI slop" has created.
trueno 20 hours ago [-]
Calculators have never been the medium in which we communicate our human experience and knowledge transfer. Calculators aren't part of the social fabric or culture. Very 2d extrapolation that somehow resulted in an alleged parallel. Language is woven deeply into civilization and our histories & been a part of our species literal survival against the most unforgiving odds/environments. Using what is effectively a ghost writer nukes trust. You cannot ascertain anything about the person behind the blog if it's clear they used AI to write it. And without that there's no way to infer expertise, rule out hallucinations, falsehoods presented as matter of fact, and the whole broad set of things LLM's get wrong because of their limitations as a technology. I have literally nothing to go off of that would prove this person knows what they are talking about. Why would anyone want to consume that?
Would it kill anyone at all to add a preamble that is upfront about using AI to write something? A chance to say: these are my ideas and I've used claude to help me state them eloquently because <english is not my first language / i dont write well / claude said it better than i ever could>, etc.? Not doing that, presenting as more capable/knowing than one probably is, is what destroys trust immediately the moment it's sniffed out that AI was used to write something.
It's irresponsible, a self-nerf, and it's annoying. Venn diagram there is basically a circle. We're all familiar with how vibe coding appears to weaken your ability to write code, like skipping the gym and expecting good muscle density. All I'm saying is people shouldn't be skipping the gym for literally communicating with each other because there's gonna be a lot of times in life where you're not gonna be able to whip out chat jippity to continue a real conversation with another person. Ceding that turf means you're willingly trading your ability to deal with real life scenarios with other human beings for short term gain. It's funny how the universe tends to find balance. Yeah, being well read and expressing ideas well is a skill, it takes work.
porridgeraisin 17 hours ago [-]
It's not that deep man, it's just a blog post about some software library. There's no civilisational communication going on here, relax. This whole thing will become irrelevant in a few decades before the end of our lifespans. It's just never that deep.
Why does it matter if it's their thought or not. If you currently care about GPU inference from webassembly on apple silicon, you can use this article. That's really about it.
Now if you care about GPU inference from wasm on apple silicon, and you found problems with this article's content, then great, comment about it. If you say that the problem with the content is due to the usual surface-level slop LLMs belt out, then great, complain about LLMs. But your comment didn't say anything about GPU inference from wasm on apple silicon.
trueno 6 hours ago [-]
[dead]
rdedev 21 hours ago [-]
> language itself is one tool of communication invented to supersede body-language and grunting and noises
That's a pretty utilitarian view of language. How would it feel if everyone spoke and wrote like a PR representative? This is what an article written by an LLM is starting to sound like.
I'm even willing to argue that the way in which you convey your ideas is as important as the idea itself. Like we could all be eating soylent for our daily nutritional requirements but we don't. The taste of the food we eat is important. It's the same with writing for me
ben-schaaf 21 hours ago [-]
> the thought and idea is theirs, it was communicated
Are they? I don't know how much they used AI; the entire article could have been written from a one-sentence prompt, in which case I'd argue the thoughts and ideas are not their own.
This isn't like using a spell checker, it's like using a ghost writer.
21 hours ago [-]
nullsanity 21 hours ago [-]
[dead]
wmf 1 days ago [-]
This works in wasmtime, not browsers.
thrill 1 days ago [-]
Why would it not work in a browser?
koolala 7 hours ago [-]
Because it's not a feature browser makers support.
m00dy 24 hours ago [-]
it would be hard to share the same memory location with the GPU, right?
junon 24 hours ago [-]
If the browser supported it, it could expose it via a buffer view or something, but that'd be quite the security surface area, one would think.
pjmlp 20 hours ago [-]
Goodbye WebAssembly "security".
Also, these folks should be amazed by 8 and 16 bit games development, or games consoles in general.
jedisct1 10 hours ago [-]
Doesn't work in web browsers, only with one headless runtime, on one CPU architecture. What's even the point of using WebAssembly here?
tancop 6 hours ago [-]
loading third party agents in a sandbox with full custom model support. right now you need to either run that code directly (super dangerous), use a vm/container (slow and complicated), or an interpreter like lua (language-bound, slow, and weak security). wasm is perfect for this: it's almost native speed, built for security, and language neutral. onnx and coreml are secure but they can only do the actual model, not all the code around it.
agambrahma 1 hours ago [-]
Yes, that's the right idea.
It's less about browsers, and more about server/edge/local-agent runtimes.
Wasm lets you have
- sandboxing (untrusted actor code)
- clean snapshot/restore
- portability of actor across machines
If you don’t need those properties, then yes ... native is obviously the better choice
adamsilvacons 11 hours ago [-]
[dead]
EthanFrostHI 22 hours ago [-]
[flagged]