From what I can tell, the 'Metal' offering runs on nodes with directly attached NVMe rather than network-attached storage. That means there isn't a per-customer IOPS cap – they actually market it as 'unlimited I/O' because you hit CPU before saturating the disk. The new $50 M-class clusters are essentially smaller versions of those nodes with adjustable CPU and RAM in AWS and GCP.
RE: EC2 shapes, it's not a shared EBS volume but a dedicated instance with local storage. BUT you'll still want to monitor capacity since the storage doesn't autoscale.
ALSO this pricing makes high-throughput Postgres accessible for indie projects, which is pretty neat.
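On the monitoring point: since the storage doesn't autoscale, a bare-bones capacity check is easy to script against any Postgres. A minimal sketch with psycopg2, where the DSN and the 10GB plan limit are placeholders:

    # Minimal disk-usage check; the DSN and plan limit are placeholders.
    import psycopg2

    DSN = "postgresql://user:pass@db.example.com:5432/app"
    LIMIT_BYTES = 10 * 1024**3  # hypothetical 10GB plan

    conn = psycopg2.connect(DSN)
    with conn.cursor() as cur:
        cur.execute("SELECT pg_database_size(current_database())")
        used = cur.fetchone()[0]
    conn.close()

    pct = 100 * used / LIMIT_BYTES
    print(f"{used / 1024**3:.2f} GiB used ({pct:.0f}% of plan)")
    if pct >= 60:  # the same threshold PlanetScale emails at
        print("warning: consider resizing before the disk fills up")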
rcrowley 5 days ago [-]
Correct you are.
Just want to add that you don't necessarily need to invest in fancy disk-usage monitoring, as we always display it in the app, and we start emailing database owners at 60% full to make sure no one misses it.
JoshGlazebrook 5 days ago [-]
> 'unlimited I/O' because you hit CPU before saturating the disk.
So in the M-10 case, wouldn't this actually be somewhat misleading as I imagine hitting "1/8 vCPU" wouldn't be difficult at all?
rcrowley 5 days ago [-]
Yes, you can certainly use up your CPU allocation on an M-10 database (at which point we offer online resizing as large as you want to go, all the way up to 192 CPUs and 1.5TiB RAM). Even still, I've been able to coax more than 10,000 IOPS from an M-10. (Actually, out of dozens of M-10s colocated on the same hardware all hammering away.)
You can get a lot more out of that CPU allocation with the fast I/O of a local NVMe drive than from the slow I/O of an EBS volume.
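If you want to see that difference yourself, one crude probe is to time random single-row lookups against a table too big to fit in RAM. A rough sketch (table name and DSN are made up, and this measures end-to-end latency including the network hop, so run it from where your app runs):

    # Crude storage-latency probe: random primary-key lookups on a table
    # larger than RAM are roughly one disk read each on a cold cache.
    # Table name and DSN are hypothetical; includes network round-trip.
    import random
    import time

    import psycopg2

    conn = psycopg2.connect("postgresql://user:pass@db.example.com:5432/app")
    cur = conn.cursor()

    N = 1000
    start = time.perf_counter()
    for _ in range(N):
        cur.execute("SELECT * FROM items WHERE id = %s",
                    (random.randint(1, 1_000_000),))
        cur.fetchall()
    elapsed = time.perf_counter() - start
    print(f"{N / elapsed:.0f} lookups/sec, {1000 * elapsed / N:.2f} ms avg")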
everfrustrated 5 days ago [-]
Doesnt "Metal" infer you get the whole box to yourself? Curious if my definitions are different to others here because I don't get what's "Metal" about sharing an instance with others.
You're still sharing NVMe I/O, CPU, memory bandwidth, etc. Not having a VM isn't really the point. (EDIT: and could have been done with non-Metal AWS instances with direct-attached NVMe anyway)
rcrowley 5 days ago [-]
Within PlanetScale's product lineup, Metal refers to the use of local NVMe drives. Nothing more. These extremely affordable sizes are indeed slices of larger boxen, though no resources are overcommitted.
bsnnkv 4 days ago [-]
I also think this naming is misleading - there is a very clear association with "bare metal", which is not what is being offered here
fosterfriends 4 days ago [-]
Planetscale support has been top-notch to work with, ++. Keep up the great work y'all!
dodomodo 5 days ago [-]
It might be slightly off topic, but I have a hard time understanding the layout of the website on mobile; it's not clear what is clickable and what's not.
samlambert 5 days ago [-]
Thank you for the feedback.
samlambert 5 days ago [-]
Really excited for more people to get to use Metal. Let me know if you have any questions.
whalesalad 5 days ago [-]
Why is Metal not offered for single instance deploys? Our app does not need this kind of uptime. We would be happy with a node going down once in a while (no data loss, of course) with a little bit of downtime to save 66% on the cost of running 2 additional nodes that will never see action.
samlambert 5 days ago [-]
It's a durability thing: we need to make sure writes are replicated off to at least one other node. There might be avenues to get Metal down to a single node in the future.
solatic 5 days ago [-]
I definitely think there are use-cases out there which are fine with daily backups. Not every use-case requires high availability or high durability.
To take an extreme case in point, where durability is irrelevant: people building caches in Postgres (so as to have only one datastore and not need Redis as well). Not a big deal if the cache blows up; just force everyone to log in again. Would love to see the vendor reduce complexity on their end and pass the savings through to the customer.
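A minimal sketch of that cache pattern, assuming an UNLOGGED table is acceptable; UNLOGGED skips WAL (and the table is truncated after a crash), which is exactly the durability trade described above. Names are illustrative:

    # Postgres-as-cache sketch using an UNLOGGED table (no WAL; emptied
    # on crash recovery, which is fine for a cache). Names illustrative;
    # pass `value` as a JSON string or wrap it in psycopg2.extras.Json.
    import psycopg2

    conn = psycopg2.connect("postgresql://user:pass@db.example.com:5432/app")
    conn.autocommit = True
    cur = conn.cursor()

    cur.execute("""
        CREATE UNLOGGED TABLE IF NOT EXISTS session_cache (
            key        text PRIMARY KEY,
            value      jsonb NOT NULL,
            expires_at timestamptz NOT NULL
        )
    """)

    def cache_set(key, value, ttl_seconds=3600):
        cur.execute("""
            INSERT INTO session_cache (key, value, expires_at)
            VALUES (%s, %s, now() + make_interval(secs => %s))
            ON CONFLICT (key) DO UPDATE
            SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at
        """, (key, value, ttl_seconds))

    def cache_get(key):
        cur.execute("SELECT value FROM session_cache"
                    " WHERE key = %s AND expires_at > now()", (key,))
        row = cur.fetchone()
        return row[0] if row else None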
edit: per your other reply re: using replication to handle resizing, maybe be upfront with customers that additional latency / downtime is the price of single-node discounts; then for resizing you could break connections, take a backup, and restore the backup on a resized node?
solatic 5 days ago [-]
Do such small caps on CPU/RAM mean that multiple customers are sharing the same server? Is there concern for noisy neighbors here, either IOPS or in case another customer's workload grows to take the full available storage on the NVMe? What kind of downtime would be needed to switch to a larger size?
rcrowley 5 days ago [-]
We've engineered in protections from noisy neighbors in both CPU and I/O usage, and we do not over-commit resources.
If your or another customer's workload grows and needs to size up, we launch three whole new database servers of the appropriate size (whether that's more CPU+RAM, more storage, or both), restore the most recent backups there, catch up on replication, and then orchestrate changing the primary.
Downtime when you resize typically amounts to needing to reconnect, i.e. it's negligible.
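In practice that just means client code needs ordinary reconnect handling. A bare-bones retry wrapper as a sketch; the backoff numbers are arbitrary, and most apps would get this from their connection pool or ORM instead:

    # Reconnect-and-retry wrapper so a resize/failover that drops the
    # connection looks like a brief blip. Backoff values are arbitrary.
    import time

    import psycopg2

    DSN = "postgresql://user:pass@db.example.com:5432/app"

    def run_with_retry(sql, params=None, attempts=5):
        delay = 0.25
        for attempt in range(attempts):
            try:
                with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
                    cur.execute(sql, params)
                    return cur.fetchall()
            except psycopg2.OperationalError:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)  # wait out the primary change
                delay *= 2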
taw1285 5 days ago [-]
For the less experienced devs, how should I be thinking about choosing between this vs Amazon Aurora?
mjb 5 days ago [-]
I don't think either is a bad choice, but Aurora has some advantages if you're not a DB expert. Starting with Aurora Serverless:
- Aurora storage scales with your needs, meaning that you don't need to worry about running out of space as your data grows.
- Aurora will auto-scale CPU and memory based on the needs of your application, within the bounds you set. It does this without any downtime, or even dropping connections. You don't have to worry about choosing the right CPU and memory up-front, and for most applications you can simply adjust your limits as you go. This is great for applications that are growing over time, or for applications with daily or weekly cycles of usage.
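(For the curious: those bounds are min/max capacity in ACUs on the cluster, where an ACU is roughly 2 GiB of RAM plus matching CPU. With boto3 the knob looks roughly like this; the cluster identifier and capacity values are placeholders.)

    # Adjusting Aurora Serverless v2 scaling bounds via boto3; the
    # cluster identifier and capacity values are placeholders.
    import boto3

    rds = boto3.client("rds")
    rds.modify_db_cluster(
        DBClusterIdentifier="my-app-cluster",
        ServerlessV2ScalingConfiguration={
            "MinCapacity": 0.5,   # floor keeps a warm instance
            "MaxCapacity": 16.0,  # ceiling caps spend
        },
        ApplyImmediately=True,
    )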
The other Aurora option is Aurora DSQL. The advantages of picking DSQL are:
- A generous free tier to get you going with development.
- Scale-to-zero and scale-up, on storage, CPU, and memory. If you aren't sending any traffic to your database it costs you nothing (except storage), and you can scale up to millions of transactions per second with no changes.
- No infrastructure to configure or manage, no updates, no thinking about replicas, etc. You don't have to understand CPU or memory ratios, think about software versions, think about primaries and secondaries, or any of that stuff. High availability, scaling of reads and writes, patching, etc is all built-in.
DSQL will be faster and a lot easier to use than Aurora.
wessorh 5 days ago [-]
I haven't read HN for a while; this appears to just be an advertisement. Did the rules change so that advertisements for new products are promoted like product placement in movies?
Asking for a friend who liked this space.
ksec 4 days ago [-]
It is a product/feature announcement, much like a blog post talking about a company's products, AWS announcing new features at its summit, or Apple announcing a new MacBook Pro.
wackget 5 days ago [-]
If anyone from Planetscale is reading this, please know I hate what you did to your website. I previously had it bookmarked as an example of excellent, usable website design. About a year ago it turned into a plaintext nightmare. The first time I saw the new design I genuinely thought that a CSS file had failed to load in my browser. It's awful.
*Edit:* It also fails to load other pages if you have JavaScript or XHR disabled.
(not fully functional) https://web.archive.org/web/20240811142248/https://planetsca...
HatchedLake721 5 days ago [-]
Same. Love PlanetScale, love their previous website design. I struggle reading white text on black backgrounds, so I don't even try to read their product pages or blog posts since there's no light mode :( Yes, I know about reader mode.
It feels like it went from "professional, Stripe-level design that you admire and find inspiring" to just "hard-to-read black website", not sure what for.
There's definitely a light mode for planetscale.com (the docs, the blog, the changelog, and the UI). Should work on both desktop and mobile. Make sure your browser is requesting light mode. The browser doesn't always follow your OS-level preferences.
HatchedLake721 4 days ago [-]
My OS is in dark mode; I usually manually switch websites to light mode, but planetscale.com (except docs) doesn't have the switch.
samdoesnothing 4 days ago [-]
Design is subjective of course. I love their new website and much prefer it to the old one.
mesmertech 5 days ago [-]
Was curious what it looks like now, and yeah, not a fan of the fake-hacker "we don't do CSS or styling" look. But then again, maybe I was just used to their old design.
heliumtera 5 days ago [-]
Can you provide an example of a website you approve of?
anoojb 5 days ago [-]
Perhaps a naive question — but why would someone use a dedicated database provider and connect from another cloud provider's application service? ...as opposed to using the same provider's db + app service offering?
Wouldn't this introduce additional latency among other issues?
wrs 5 days ago [-]
I had the same latency concerns when I heard about this PaaS DB trend, but you’ll note that this runs in the AWS (soon GCP) region of your choice, so if you’re hosted there, it should be about the same latency as using their managed DB service.
If you aren’t hosting the app in the same AWS/GCP region then I still have the same question.
lab14 4 days ago [-]
> so if you’re hosted there, it should be about the same latency as using their managed DB service.
Yes and no. In my AWS account I can explicitly pick an AZ (us-east-2a, us-east-2b, or us-east-2c), but Availability Zones are not consistent between AWS accounts.
But that's exactly why they introduced the AZ IDs (use1-az1 as opposed to us-east-1a), so you can tell whether you're really in the same zone, regardless of the name you see in a particular account. See https://docs.aws.amazon.com/ram/latest/userguide/working-wit...
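You can dump your account's name-to-ID mapping with a couple of lines of boto3:

    # Print this account's AZ name -> AZ ID mapping. Zone *names* are
    # shuffled per account; zone *IDs* are not, so compare ZoneId to
    # check that two accounts really share a physical AZ.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-2")
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(f'{az["ZoneName"]} -> {az["ZoneId"]}')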
lab14 4 days ago [-]
Ah, thanks Internet stranger. TIL.
rcrowley 5 days ago [-]
PlanetScale operates databases in AWS and GCP. There's no network latency penalty for choosing PlanetScale if you're hosting your app in one of those cloud providers (and in one of the many regions we operate in).
ShakataGaNai 5 days ago [-]
More importantly, no bandwidth-charge penalty, as leaving AWS isn't inexpensive.
FancyFane 5 days ago [-]
From the PlanetScale perspective, keep in mind the ability to shard. What happens when the largest single-node Aurora instance can no longer keep up with application/traffic demands?
I ask because we see it more often than not, and for that situation sharding the workload is the best answer. Why have one MySQL instance responding to requests when you could have 2, 4, 8, ... 128, etc. MySQL instances responding as a single database? Each of the shards in that database can also be vertically scaled as needed.
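The routing idea itself is simple, even though real systems like Vitess layer resharding, cross-shard queries, and topology management on top. A toy hash router as illustration, with placeholder DSNs:

    # Toy shard router: hash the sharding key to pick one of N backends
    # deterministically. Purely illustrative; DSNs are placeholders, and
    # real systems (e.g. Vitess) handle resharding and cross-shard work.
    import hashlib

    SHARDS = [f"mysql://user:pass@shard{i}.example.com/app" for i in range(8)]

    def shard_for(key: str) -> str:
        h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
        return SHARDS[h % len(SHARDS)]

    print(shard_for("customer-42"))  # same key always routes to the same shard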
carlm42 5 days ago [-]
It depends a bit on your cloud provider, but some providers' offerings don't quite match your needs, or their pricing might be much more expensive at equal performance.
croemer 4 days ago [-]
$60/TB of egress is quite a lot
buremba 4 days ago [-]
but at least you get a fraction of a CPU and 1GB of memory.
buster 5 days ago [-]
Sounds amazing, but I would rather be able to run the database locally and use the same setup in dev as in production. Is this possible?
rcrowley 5 days ago [-]
PlanetScale's Postgres offering is as close to plain-old-Postgres as we could possibly build.
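Which means dev/prod parity mostly reduces to a connection-string swap, something like (URLs are placeholders):

    # Same code path in dev and production: point DATABASE_URL at the
    # hosted cluster in prod, fall back to local Postgres in dev.
    # Both URLs are placeholders.
    import os

    import psycopg2

    DSN = os.environ.get(
        "DATABASE_URL",
        "postgresql://postgres:postgres@localhost:5432/app",
    )
    conn = psycopg2.connect(DSN)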
So what happens if you get an NVMe failure? Is there automatic failover and restore?
How do cross-data-center nodes work?
rcrowley 5 days ago [-]
These are all three-node clusters with PlanetScale's management handling backup, restore, failover, and replication.
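If you're curious what that looks like from a client connection, Postgres has read-only views for it, assuming your role is allowed to see them (managed providers vary):

    # Read-only peek at replication state. pg_is_in_recovery() is true
    # on replicas; pg_stat_replication (on the primary) lists replicas.
    # Some columns may require elevated privileges to be visible.
    import psycopg2

    conn = psycopg2.connect("postgresql://user:pass@db.example.com:5432/app")
    cur = conn.cursor()

    cur.execute("SELECT pg_is_in_recovery()")
    print("replica" if cur.fetchone()[0] else "primary")

    cur.execute("SELECT client_addr, state, sync_state FROM pg_stat_replication")
    for addr, state, sync in cur.fetchall():
        print(f"replica {addr}: {state} ({sync})")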
ngalstyan4 5 days ago [-]
Sounds cool!
Would be curious to know what the underlying AWS EC2 instance is.
Is each DB on a dedicated instance?
If not, are there per-customer IOPS bounds?
rcrowley 5 days ago [-]
We run on the same instance types the larger PlanetScale Metal sizes offer as whole instances. For Intel that's r6id, i4i, i7i, i3en, and i7ie. For ARM that's r8gd, i8g, and i8ge. (Right now, at least. AWS is always cookin' up new instance types.) Same story will soon be true for GCP.
samlambert 5 days ago [-]
There aren't per-customer IOPS limits, but the CPU will be the bottleneck.
orphea 5 days ago [-]
> $50
Looks like US only. Choosing Europe is +$10, Australia is +$20.
bigTMZfan 5 days ago [-]
Will these smaller instances be offered for Vitess / MySQL compatible users?
rcrowley 5 days ago [-]
Soon!
boundlessdreamz 5 days ago [-]
Off-topic: when will Postgres 18 be offered on Metal?
kelp 4 days ago [-]
You can expect to see it in early 2026 for both Metal and EBS backed databases.
kelp 3 days ago [-]
In typical PlanetScale style, we like to beat our estimates: https://planetscale.com/blog/postgres-18-is-now-available
This will be faster than an equivalent RDS instance and will handle more of the operational lifecycle around failover and high availability, with less downtime than RDS.
unbelievably 5 days ago [-]
$50 gets you an EIGHTH of a vCPU, 1GB RAM, and 10GB SSD??? This is quite frankly highway robbery. Not to mention the laughable bandwidth. Hetzner will give you 16 vCPU, 32GB RAM, and 640GB SSD for less than that. We're talking over an order of magnitude difference in value here.
everfrustrated 5 days ago [-]
You're not paying for the infra, you're paying for not having to hire people who would have had to build/manage/test/operate/secure it.
carlm42 5 days ago [-]
On Hetzner you will be on the hook for managing the database though, and DBA is most certainly a full-time job if you have a serious use-case for it.
dig1 5 days ago [-]
1 GB of RAM for Postgres is really only useful for tinkering IMHO. Even for development, you'll quickly need more memory, so HA doesn't provide much value here. If you go with something even remotely reasonable (4 GB RAM, 200 GB SSD, 1/2 vCPU — and that's still on the low end), the cost jumps to about $290/month. For that price, you could easily hire someone to set up HA Postgres for you on Hetzner or OVH, and once configured, HA Postgres typically requires minimal ongoing maintenance.
Also, this is a shared server, not a truly dedicated one like you'd get with bare-metal providers. So calling it "Metal" might be a misleading marketing trick, but if you want someone to always blame and don't mind overpaying for that comfort, then the managed option might be the right thing.
unbelievably 5 days ago [-]
Considering they're charging an unfathomable $4529/mo for 256 GB databases, extrapolating that to a serious use case, you can indeed just hire someone full-time with how much you'd save. And then you'll actually have someone who understands how databases work instead of treating the database like an expensive black box.
edit: my bad, that's the price for 256GB RAM.
carlm42 5 days ago [-]
Yeah, per your edit that'd be for 256GB RAM, which puts it into serious-dollar territory. For comparison, I checked what AWS charges for the same spec, and that'd be $4616/month (for a db.m8gd.16xlarge), and that doesn't even get you an actual NVMe. You can of course build the same for cheaper on Hetzner, but again, then you're also on the hook for operating the thing, which at that size is possibly non-trivial.
tempest_ 5 days ago [-]
Cloud databases have been pricey for a while.
The reality is most databases are tiny as shit, and most apps can tolerate the massive latency that the cloud providers' DBs offer.
That's why it's sorta funny that we're rediscovering that non-network-attached storage is faster.
solatic 5 days ago [-]
> $4529/month... can indeed just hire someone full-time
That's $54,348/year, not including the cost of benefits, not including stock compensation. Let's say you reserve 20% for benefits and that comes out to $43,478.40 in salary.
Besides the benefit of not needing the management / communication overhead of hiring somebody, do you know any DBAs willing to take a full-time job for $43,478.40 in salary?
unbelievably 5 days ago [-]
Missed the 'extrapolating' part -- for 3x that, absolutely.
solatic 4 days ago [-]
But that's the point, innit? How many SMEs need multiple production databases of that size? Nobody's really suggesting that Fortune 500-size enterprises should get by without DBAs. There's a big difference between an enterprise paying for a DBA to take care of fleets of production databases and a <50 employee shop that should do just fine with a single production database.
krawcu 4 days ago [-]
I think this product is mostly only viable in the NA market, where SDE wages are much higher than European ones, making it easier to justify $x/mo for a DBaaS instead of hosting your own.
cheema33 4 days ago [-]
> $50 bucks gets you an EIGHTH of a vCPU, 1GB RAM, and 10GB SSD???
Apparently there are people who find this offering compelling. The lack of value is quite stunning to me.
skeptrune 5 days ago [-]
Let's gooooo! Incredible deal for indiehackers.