I think the article is wrong in its core premise. While electrons get added to or removed from the floating gate, the total number of electrons in the SSD chip stays the same. Gates are capacitors: in order to add electrons to one capacitor plate, you have to remove an equal number of electrons from the other plate, i.e. from the transistor channel. The net charge of an SSD chip is always zero. Otherwise it would just go bang. <s>2.43×10^-15</s> [my bad 1] 2.67×10^15 electrons is about 430µC - that's a lot of charge to separate macroscopically.
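(For anyone who wants to check the arithmetic, here's a quick Python sanity check; the constants are the usual values, rounded.)

```python
# How many electrons does the article's 2.43e-15 kg correspond to,
# and how much net charge would that be if it were actually unbalanced?
ELECTRON_MASS = 9.109e-31    # kg
ELECTRON_CHARGE = 1.602e-19  # C

mass_delta = 2.43e-15        # kg, the figure from the article
n_electrons = mass_delta / ELECTRON_MASS
charge_uC = n_electrons * ELECTRON_CHARGE * 1e6

print(f"{n_electrons:.3g} electrons")  # 2.67e+15 electrons
print(f"{charge_uC:.0f} uC")           # 427 uC
```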
Therefore the mass (weight is a different thing, though it is proportional to mass at a given constant gravity potential) of the data on an SSD isn't fundamentally different from an HDD - both are caused by a change of internal energy without any change in the number of fermions. I'd expect data on an SSD to have a larger mass change because a charged capacitor always stores more energy than a discharged one, while the energy of magnetic domains is less directional and depends mostly on the state of neighboring domains - but I'm not sure about this part.
[1] Thanks stackghost.
nickcw 1 day ago [-]
> So, assuming the source material is correct and electrons indeed have mass, SSDs do get heavier with more data.
That is definitely wrong! No way the source material has more electrons. The only way it could do that is by being charged.
Richard Feynman, The Feynman Lectures:
"If you were standing at arm's length from someone and each of you had one percent more electrons than protons, the repelling force would be incredible. How great? Enough to lift the Empire State Building? No! To lift Mount Everest? No! The repulsion would be enough to lift a "weight" equal to that of the entire earth!"
From: https://tycho.parkland.edu/cc/parkland/phy142/summer/lecture...
See, now, if this was Reddit...this is the opportunity for a yo momma joke. But here we are on HN, so I'll just point out that this is the opportunity for a yo momma joke.
throwup238 20 hours ago [-]
Yo mamma is so fat she broke the Coulomb barrier?
Kiboneu 19 hours ago [-]
Exactly. On top of that, most managed flash (which is equivalent to SSD controllers) will pass all writes through a modified cyclic XOR pad in order to keep the /bit/ entropy high. I don't think the article's claim holds up across multiple abstraction layers.
LorenPechtel 4 hours ago [-]
Which is the same reason storing data to a HDD doesn't add weight. You can pack the data tighter if you are writing basically balanced 1s and 0s. Thus you can pack more bytes into a given area by encoding them into patterns with even distributions even though that means you need to write more bits.
tliltocatl 4 hours ago [-]
But SSD erasing must write a constant (either one or zero). So an erased ready-to-write SSD block will have consistently different energy than one written with a random scrambled pattern. Same for SMR HDDs - but not for CMR.
stackghost 1 day ago [-]
>2.43×10^-15 electrons
I believe TFA reads 2.43×10^-15 kg, not electrons. Unless SSDs are creating new and exciting physics, one can't have less than one electron, as it's an elementary particle.
karmakaze 1 day ago [-]
Well you could have a virtual particle whose mass could be time-averaged.
jmalicki 1 day ago [-]
Neutrinos weigh far less than electrons (but while NAND flash involves super weird physics, it's not that weird)
stackghost 1 day ago [-]
They do weigh far less, but a quantity of "10^-15 electrons" is still impossible.
I think my favorite part of that comment is "documenting" that 10^(-15) is not negative by appealing to Wolfram Alpha.
agentdrek 21 hours ago [-]
your user name is found at the 4,922,096,564th digit of Pi
stackghost 19 hours ago [-]
Yes, you're correct. Now ask yourself if "one quadrillionth of an electron" is a quantity that's possible to have.
ChrisClark 22 hours ago [-]
Good thing he didn't say that
_alternator_ 20 hours ago [-]
Another bit I’m surprised seems to have gotten completely glossed over: there is a deep relationship between _entropy_ and mass which puts bounds on the amount of information you can place in a given volume.
TLDR: a given region of space can’t have more entropy than a black hole of the same volume. Rearranging terms, you find that N bits of information (for large N) has an equivalent black hole size, which in turn has a mass…
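A rough numeric sketch of that rearrangement (the Bekenstein-Hawking relation S/k_B = 4*pi*G*M^2/(hbar*c) is my own filling-in here, so treat it as back-of-envelope):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34  # reduced Planck constant, J s
C = 2.998e8        # speed of light, m/s

def min_black_hole_mass(n_bits):
    """Mass (kg) of a black hole whose horizon entropy equals n_bits.

    From S = k_B * N * ln(2) and S/k_B = 4*pi*G*M^2/(hbar*c),
    solved for M."""
    return math.sqrt(n_bits * math.log(2) * HBAR * C / (4 * math.pi * G))

# One bit lands near the Planck mass; a 2 TB drive's worth of bits
# corresponds to a surprisingly tangible mass.
print(min_black_hole_mass(1))         # ~5e-9 kg
print(min_black_hole_mass(2e12 * 8))  # ~0.02 kg
```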
ajross 21 hours ago [-]
> energy of magnetic domains is less directional and depends mostly on the state of neighbor domains
Yes, but it's the same thing. The flux changes on the drive define the bits. It's probably true that a drive storing all 1's or all 0's would be quantitatively (but surely immeasurably) lighter. But in practice a drive storing properly compressed high-entropy data is going to see a flux change every other bit on average. And all of those are regions of high magnetic field with calculable energy density. Same deal as charge in a capacitor, which also stores energy in the field.
zahlman 1 day ago [-]
TFA started out seeming well enough written but definitely turned LLM-padded in the middle. And yeah, I think you're right about the actual science.
nwellnhof 1 day ago [-]
Reminds me of an old April Fools' prank in German c't magazine. They offered a defragmentation-like tool for HDDs that claimed to distribute 0s and 1s more evenly on the drive to make it run more smoothly and extend its lifespan.
jerf 1 day ago [-]
Amusingly, that's unnecessary, but possibly not for the reason most people think. It's not because the hard drive hardware is oblivious to runs of 0s and 1s exactly... it's because it's actually so sensitive that it already is recording the data in an encoding that doesn't allow for long runs of 0s and 1s. You can store a big file full of zeros on your disk and the physical representation will be about 50/50 ones and zeros on the actual storage substrate already. Nothing you do at the "data" layer can even create large runs of 0s or 1s on the physical layer in the first place. See https://www.datarecoveryunion.com/data-encoding-schemes/
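To make that concrete, here's a toy Python sketch of MFM, one of the old encodings that does this (rule per the usual description: a clock bit is inserted before each data bit, and it's 1 only when both neighboring data bits are 0):

```python
# Toy MFM encoder: a clock bit precedes every data bit, set to 1 only
# when both the previous and current data bits are 0. Runs of identical
# data bits therefore become alternating patterns on the platter.
def mfm_encode(bits):
    out, prev = [], 0
    for b in bits:
        clock = 1 if (prev == 0 and b == 0) else 0
        out += [clock, b]
        prev = b
    return out

zeros = mfm_encode([0] * 8)
ones = mfm_encode([1] * 8)
print(zeros)                    # [1, 0, 1, 0, ...] -- alternating
print(sum(zeros) / len(zeros))  # 0.5
print(sum(ones) / len(ones))    # 0.5
```

Even the two most degenerate inputs land at exactly 50/50 on the "platter".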
b112 20 hours ago [-]
Well it depends when it was claimed.
I imagine MFM drives from 1985 might be a bit different from drives that are billions of times more data dense today. Back then, the drive didn't even control track width, the controller card did. And it was exposed to the OS.
I remember turning my "20MB", yes MB drive into a 30MB drive by messing with the track width. Of course, this was the time when people had Commodore 64 300baud modems, and would overclock them and get 450baud out of them.
In my computer club, we wrote a little piece of software to see which of us could get the highest bandwidth on a modem, one was even capable of just over 500baud!
After ranking, we all agreed to "trade down", so the guy with the fastest modem swapped his with the owner of the local Punter BBS. Everyone else traded so we still had the same ranking. That way, the BBS would always be able to support everyone at max speed, and everyone would still be "lucky" in terms of "next fastest modem".
I can't imagine that happening today.
jerf 9 hours ago [-]
You could imagine what MFM drives were like, or you could read about it, in the link I gave.
b112 6 hours ago [-]
I did read, but so ingrained is calling the "controller" MFM that I literally thought it was referencing the standard, which I think was ST-506 (this was in 1983, so the timing seems right?).
EG, I literally thought of the controller and encoding as differing things, both separately called MFM. Ah well, it only took 40 years to discover differently.
Thanks for the link.
foobiekr 1 day ago [-]
This principle applies to a lot of things. Signaling, for example: old-school optical links (OC48 timeframe) did not feature scramblers, so a malicious packet could on occasion cause them to de-train and go out of sync, since the bit pattern looks like an extended loss of light.
Long since fixed, but it was a common problem.
That said, digital storage media has been somewhat pattern-sensitive for a century or more: https://en.wikipedia.org/wiki/Lace_card
userbinator 22 hours ago [-]
High-density NAND flash also needs "whitening", i.e. scrambling the data to be stored so that the numbers of 1s and 0s are balanced and randomly distributed, to avoid wearing some cells (the ones storing 0s) more than others, as well as to reduce pattern-dependent disturb errors.
The self-synchronizing scrambler of 10GBase-SR and its relatives is a beautiful piece of engineering.
Interestingly, I heard that entrenched telco people were pushing for a much more complicated, SONET-ish approach. But classic Ethernet simplicity carried the day, and it's really nice...
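If anyone's curious what "self-synchronizing" buys you, here's a minimal Python sketch (taps from the x^58 + x^39 + 1 polynomial that 64b/66b uses, if I'm reading IEEE 802.3 Clause 49 right):

```python
import random

# Self-synchronizing scrambler sketch. Each output bit is the input bit
# XORed with two taps into a shift register of previously *transmitted*
# bits; the receiver taps its history of *received* bits the same way,
# so after 58 wire bits its register matches the transmitter's with no
# shared seed at all.
def scramble(bits, state=None):
    state = list(state) if state else [0] * 58
    out = []
    for b in bits:
        y = b ^ state[38] ^ state[57]   # taps for x^39 and x^58
        state = [y] + state[:-1]
        out.append(y)
    return out

def descramble(bits, state=None):
    state = list(state) if state else [0] * 58
    out = []
    for y in bits:
        out.append(y ^ state[38] ^ state[57])
        state = [y] + state[:-1]        # shift in the *scrambled* bit
    return out

rng = random.Random(1)
tx_state = [rng.randint(0, 1) for _ in range(58)]  # leftover line state
data = [0] * 200                    # worst case: a long run of zeros
wire = scramble(data, tx_state)
print(sum(wire) / len(wire))        # ones-density on the wire, close to 0.5
# The receiver starts with the wrong (all-zero) register, yet is fully
# in sync once 58 wire bits have flushed through it:
assert descramble(wire)[58:] == data[58:]
```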
omgbear 1 day ago [-]
I think my network card does that.
SanjayMehta 23 hours ago [-]
Elektor magazine used to prank too; my favourite one was their "solar powered pocket torch." It wasn't rechargeable.
SuperscalarMeme 24 hours ago [-]
Okay I think I can clarify this:
Electrons trapped in the gate (when storing a 0) come from the substrate. The substrate is connected to ground, and the “lost” electrons are replenished. So yes, net chip weight grows when 0s are written.
However, weight relative to what? All 0s on a chip will be heavier (the heaviest). All 1s would be the lightest. 50/50 1s and 0s would be the middle, which is where I’d expect generic “data” to fall.
userbinator 21 hours ago [-]
The insulating oxide layers prevent the electrons from leaking out quickly, allowing data to persist for 10+ years under normal conditions.
In SLC flash, 10+ years is normal. Modern QLC is far more volatile: https://news.ycombinator.com/item?id=43739028
https://www.eejournal.com/fresh_bytes/how-do-you-weigh-a-pro...
But what about the magnetic properties of SSDs? Any additive alignment for data?
Or the opposite, magnetic aligned fields for all 1’s or all 0’s?
Negligible now, but critically important effects to understand before we build a planet sized drive and wipe it!
Also, a planet sized drive will need to explicitly maintain large reserves of electrons. In theory, enough for an all ones (or zeros) state.
But that could be handled by tiling areas of ones=high and zeros=high, with tile charge flipping to maintain a balance in electron needs, locally and globally.
nritchie 22 hours ago [-]
An encrypted drive is likely to have (close to) equal numbers of 0's and 1's whether full or empty, so any of these arguments are moot.
theandrewbailey 22 hours ago [-]
If the drive isn't encrypted, is it possible that controllers use some kind of encoding to balance out the number of bits, so that there's not a long run of 0s or 1s?
userbinator 20 hours ago [-]
Yes, this is necessary for high density NAND flash and is referred to as "whitening" or "scrambling". Not needed at all for SLC or older MLC.
epx 1 day ago [-]
Was expecting Boltzmann and entropy to be involved at some point :(
Time to replace "I'm zero surprised" with "That's a zero Shannon event"
nvader 22 hours ago [-]
I'm Mega-Shannoned! Mega-Shannoned, I tell you, to learn that gambling is going on here.
bilsbie 1 day ago [-]
Could you spin an SSD on a string really fast and load data when it’s on one side and delete it on the other and create forward motion?
Massless propulsion??
tlb 1 day ago [-]
The rate at which molecules of plastic sublimate off the surface of the enclosure is probably a much larger amount of mass. The rate increases with e^kT, where k is such that it doubles about every 10 degrees C. So if you get a drive and fill it with data (which warms it up significantly) the lost casing material will dominate the mass balance.
userbinator 20 hours ago [-]
The rate at which particles of dust settle on the surface of the enclosure is even higher.
TurdF3rguson 1 day ago [-]
I guess it's because the 1s weigh more than 0s? Which is counterintuitive because the 0s are chubbier.
CalChris 23 hours ago [-]
E = mc^2 so m = E / c^2
c is a really big number.
c^2 is a really really big number.
E is small.
m is really really small.
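The whole chain fits in a few lines of Python:

```python
C = 2.998e8    # speed of light, m/s: a really big number
E = 1.0        # one joule: a small, everyday amount of energy
m = E / C**2   # C**2 is a really really big number
print(m)       # ~1.1e-17 kg: really, really small
```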
Twey 20 hours ago [-]
Given the existence of Szilard's engine showing that information can be converted to energy, can we not conclude that any system storing information has potential energy and therefore mass?
alanh 1 day ago [-]
"Data has weight, but only on SSDs" - Not just SSDs! Unless you always hang the chad, surely writing data onto punchcards reduces the weight of that 'storage medium'!
erg0s4m 1 day ago [-]
So it has negative weight in this case.
TheOtherHobbes 1 day ago [-]
Negative mass = negative energy, so we should be able to make an Alcubierre Drive out of punched cards.
tavavex 24 hours ago [-]
If some PMs today evaluate performance by the number of lines of code, I wonder if the punch card equivalent was weighing the punched-out holes that were removed by each developer.
mannyv 7 hours ago [-]
Does information weigh anything?
TazeTSchnitzel 1 day ago [-]
Lights in video games are real, but only if you're using an OLED or CRT.
mycall 17 hours ago [-]
You can do binary by etching glass, and the more data you have, the less it weighs. Negative space is quite useful.
slicktux 1 day ago [-]
Classic Cunningham's Law… post the wrong answer and you'll get the correct one. Then the comments can be used by an LLM to output the correct answer!
inetknght 20 hours ago [-]
Data has negative weight on optical media. The data gets burned off of it!
rngfnby 22 hours ago [-]
All data changes mass of the medium:
Every data storage medium requires some work be done to it.
E=mc^2
All data storage media have mass.
QED
LorenPechtel 4 hours ago [-]
Work being done to it will show up as heat, which indeed is subject to E=mc^2. But when it cools there's no residual mass.
The data doesn't actually have weight because they aren't going to store a 1 or a 0, but rather do something like store 01 vs 10.
thayne 18 hours ago [-]
Just because work is done doesn't mean the energy is stored. It could just dissipate as heat.
As a counter-example, consider etching data in some form into another material, say stone or metal. You do work to remove the material you etch away, but since you are removing material, the final mass is actually less than what you started with.
That said, I believe most digital storage uses a high energy and low energy state to store 0 and 1, and in that case the high energy state will have (very, very slightly) more mass than the low energy state. But even then, having all bits in the high energy state would be the "heaviest", but would effectively have no data.
rngfnby 6 hours ago [-]
"All data changes mass of the medium"
The first line
ANarrativeApe 21 hours ago [-]
This also applies, on a larger scale, when one adds data to a medium like a sheet of paper: the graphite or ink adds to the mass of the storage medium.
But does this constitute data?
The maximum mass would be achieved by covering the entire sheet with graphite/ink which, it could be argued is not data (unless you consider it to be a binary cell in a larger byte of data).
I don't know the physics of thermal paper, but I suspect that it might be the opposite.
My point?
This is not evidence that data has mass; it is evidence that transcribing data onto a storage medium may change the mass of the storage medium, and that change may be positive or negative.
Perhaps I should have this carved on my tomb stone...
antimatter15 1 day ago [-]
Another fun calculation is that due to special relativity, a hard drive that is spinning gains a certain amount of mass due to the rotational kinetic energy and E=mc^2.
Assuming the platter is 100g, 42mm, spinning at 7200RPM, there is about 25J of rotational kinetic energy, whose mass equivalent is 2.8x10^-13g (0.28 picograms).
Assuming 200 electrons per NAND floating gate with 3bits/cell TLC on a 2TB SSD, there would be 5.3x10^14 electrons, weighing about 0.5 picograms.
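Checking the arithmetic in Python (masses come out in picograms):

```python
import math

# Rotational kinetic energy of the platter, treated as a thin uniform disk.
m_platter, radius, rpm = 0.100, 0.042, 7200   # kg, m, rev/min
omega = rpm / 60 * 2 * math.pi                # angular speed, rad/s
inertia = 0.5 * m_platter * radius**2
ke = 0.5 * inertia * omega**2
c = 2.998e8
print(ke)                     # ~25 J
print(ke / c**2 * 1e15)       # ~0.28 pg (1 kg = 1e15 pg)

# Mass of the trapped electrons on the SSD side of the comparison.
n_electrons = 5.3e14          # the figure above for a 2 TB TLC drive
m_electron = 9.109e-31        # kg
print(n_electrons * m_electron * 1e15)  # ~0.48 pg
```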
metalman 13 hours ago [-]
All data storage and retrieval uses energy, which has some mass equivalent, but it radiates as heat, magnetism, light, and electricity, which must produce molecular changes in the data medium and surrounding structures.
Figuring out the net energy/weight balance for each possible data use is going to be way out there on the thinnest limbs of conjecture. Increasing the degas rate or polymerising the surface of x, x¹, x²...
Data does have real weight. In one of my early assignments my firmware was too large to fit on one EPROM. Naively I thought the hardware team could just add another EPROM to the board. Turns out while they had left place for another device, it would have exceeded the payload budget by a few grams. Had to go back and reduce the code by a few hundred bytes.
1970-01-01 1 day ago [-]
Now do fiber and tell me the relativistic mass of my router so my ISP can charge me an overweight fee.
jmclnx 1 day ago [-]
Interesting, I wonder if one can translate this into the amount of data on the drive? Maybe it does not matter unless one cleared the drive using dd(1).
Also, would trimming cause a different value even though the data size remains the same? I would think so, assuming I understand trim.