While this is cool and I dig it, I'm really, really thankful for maintenance windows at my current job. In the real world, 99.9% of systems aren't used 24/7/365. Just do the cutoff when everyone is asleep. Then restart everything to be sure.
embedding-shape 22 hours ago [-]
> In the real world, 99.9% of systems aren't used 24/7/365. Just do the cutoff when everyone is asleep
"Real world" being something that covers max what, 10 hours of a day? What about things that are used by the entire world? I think there is more than you realize of those sort of services underpinning the entire internet and the web, serving a global user base.
mystifyingpoi 3 hours ago [-]
> What about things that are used by the entire world?
Well, for the remaining 0.1% - go ahead and use the fancy hot replication thingy. Sometimes there is no choice, and that's fine. Although that might mean that the system architecture is busted.
MagicMoonlight 21 hours ago [-]
Almost nothing in the world is used globally. You have a handful of things like YouTube and Facebook and the Visa network.
Nobody is using slopwork’s new CrudX at a global scale.
sanswork 13 hours ago [-]
Basically every large multinational corporation will have a bunch of systems that are used globally. Most advertising companies work on global traffic patterns.
citrin_ru 11 hours ago [-]
A large multinational corporation can go a long way by splitting their IT infra into multiple regions and doing maintenance in different regions at different times.
mystifyingpoi 3 hours ago [-]
Exactly, that's how you do it. Having one system for the whole world is risky.
aloha2436 18 hours ago [-]
The Visa network is the frontend to a truly staggering number of issuers who also want to maintain a similar level of uptime to support their cardholders wherever they are in the world.
lll-o-lll 16 hours ago [-]
>> Almost nothing in the world is used globally.
??? I’ve worked in this software game for over 20 years. I’ve yet to experience this “no need to worry about the globe”. I think you’re falling into the fallacy of assuming local experience is general experience.
There is a very large amount of B2B software out there serving multinationals of all types. Perhaps it is surprising, but there’s a large number of software solutions that aren’t that big but still have customers in all four corners of the world.
ayuhito 19 hours ago [-]
> Just do the cutoff when everyone is asleep.
In this age, many smaller companies serve customers across the globe. There is no common “asleep”.
Thaxll 1 days ago [-]
We need more details on step 6. This is the hard part: you swap connections from A to B, but if B is not synced properly and you write to it, the two start to diverge and there is no way back.
Say B is slightly out of date (replication-wise), the service modifies something, and then a change arrives from A that modifies the same data you just wrote.
How do you ensure that B is up to date without stopping writes to A (i.e. no downtime)?
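(For context, the usual way this gap gets closed is to briefly hold or buffer new writes, wait until B has applied everything A has committed, and only then flip traffic; that is roughly the role the "requests are buffered" step plays. Below is a minimal sketch of that idea, assuming a GTID-based MySQL source/replica pair and the mysql-connector-python driver. The hosts, credentials, and the pause_writes / resume_writes_on hooks are placeholders for whatever proxy layer sits in front; this illustrates the general technique, not how Vitess implements it internally.)

```python
# Sketch only: hold new writes, wait for the target to catch up via GTIDs,
# then repoint traffic. Hosts, credentials, and the two callbacks are
# placeholders for the proxy/buffering layer in front of the databases.
import mysql.connector

SOURCE = dict(host="source-db.example", user="migrator", password="...", database="app")
TARGET = dict(host="target-db.example", user="migrator", password="...", database="app")

def cutover(pause_writes, resume_writes_on, timeout_s=30):
    src = mysql.connector.connect(**SOURCE)
    tgt = mysql.connector.connect(**TARGET)
    try:
        pause_writes()  # start buffering/holding new writes at the proxy layer

        # Everything A has committed so far is described by its executed GTID set.
        cur = src.cursor()
        cur.execute("SELECT @@GLOBAL.gtid_executed")
        (gtid_set,) = cur.fetchone()

        # Block until B has applied that GTID set (returns 0 on success, 1 on timeout).
        cur = tgt.cursor()
        cur.execute("SELECT WAIT_FOR_EXECUTED_GTID_SET(%s, %s)", (gtid_set, timeout_s))
        (timed_out,) = cur.fetchone()
        if timed_out:
            raise RuntimeError("B did not catch up in time; abort and keep writing to A")

        resume_writes_on("B")  # release the buffered writes against the new primary
    finally:
        src.close()
        tgt.close()
```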
mattlord 23 hours ago [-]
It's open source. You can get as many details as you like :)
https://github.com/vitessio/vitess
https://vitess.io/docs/reference/vreplication/
https://vitess.io/docs/reference/features/vtgate-buffering/
Not sure how they do it, but I would do it like so:
Have the old database be the master and let the new one be a slave. Load in the latest db dump; it may take as long as it wants.
Then start replication and catch up on the delay.
You would need, depending on the db type, a load balancer/failover manager. PgBouncer and Pgpool-II come to mind, but MySQL has some as well. Let that connect to the master and slave, and connect the application to the database through that layer.
Then trigger a failover. That should be it.
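A minimal sketch of the "catch up, then fail over" part of that flow, assuming a MySQL 8.0+ replica and the mysql-connector-python driver (on older versions the statement is SHOW SLAVE STATUS and the column is Seconds_Behind_Master). Host names and thresholds are placeholders, and the actual failover would be triggered through the load balancer/failover layer described above:

```python
# Sketch only: poll the new database's replication lag until it is small
# enough that a failover can be triggered with minimal interruption.
import time
import mysql.connector

NEW_DB = dict(host="new-db.example", user="migrator", password="...")

def wait_for_replica_to_catch_up(max_lag_seconds=5, poll_interval=10):
    conn = mysql.connector.connect(**NEW_DB)
    cur = conn.cursor(dictionary=True)
    try:
        while True:
            cur.execute("SHOW REPLICA STATUS")
            rows = cur.fetchall()
            if not rows:
                raise RuntimeError("replication is not configured on the new database")
            lag = rows[0]["Seconds_Behind_Source"]
            if lag is not None and lag <= max_lag_seconds:
                return  # close enough: hand off to the failover manager
            time.sleep(poll_interval)
    finally:
        conn.close()
```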
Snelius 19 hours ago [-]
> Load in latest db dump, may take as long as it wants.
400TB, that's about a week+?
> Then start replication and catch up on the delay.
Then you have the changes that accumulated during that delay, on the order of +-1TB. That means a few more days of syncing those changes, while new changes keep coming in.
They said "current requests are buffered", which is impossible, especially for long-running distributed transactions that are already in progress (those can take hours or days, e.g. for analytics).
Overall this article is BS, or some super-custom case that's irrelevant for common systems. You can't migrate without downtime; it's physically impossible.
freakynit 18 hours ago [-]
Feels the same to me as well.
"Take snapshot and begin streaming replication"... like to where? The snapshot isn't even prepared fully yet and definitely hasn't reached the target. Where are you dumping/keeping those replication logs for the time being?
Secondly, how are you managing database state changes due to realtime update queries? They are definitely going in source table at this point.
I don't get this. Im still stuck on point 1... have read it twice already.
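(The classic MySQL answer to "where do those changes live in the meantime" is: in the source's binary logs. The snapshot records a consistent GTID position, and once the copy is restored on the target, replication simply resumes from that position. Below is a rough sketch under those assumptions, using the mysql-connector-python driver with placeholder hosts and credentials; it illustrates stock MySQL 8.0+ GTID replication, not VReplication's internals, and older versions would use CHANGE MASTER TO / START SLAVE instead.)

```python
# Sketch only: after restoring a snapshot taken at a known GTID position,
# point the target at the source and let it replay the source's binlogs
# from that position onward. No separate log store is needed.
import mysql.connector

TARGET = dict(host="target-db.example", user="migrator", password="...")

def start_catch_up(source_host, repl_user, repl_password):
    conn = mysql.connector.connect(**TARGET)
    cur = conn.cursor()
    # With GTID auto-positioning, the target asks the source for every
    # transaction it has not yet applied (i.e. everything since the snapshot).
    cur.execute(
        "CHANGE REPLICATION SOURCE TO "
        f"SOURCE_HOST='{source_host}', SOURCE_USER='{repl_user}', "
        f"SOURCE_PASSWORD='{repl_password}', SOURCE_AUTO_POSITION=1"
    )
    cur.execute("START REPLICA")
    conn.close()
```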
mattlord 5 hours ago [-]
It's open source. If you want to understand exactly how, you certainly can! :-)
https://github.com/vitessio/vitess
https://vitess.io/docs/reference/vreplication/
https://vitess.io/docs/reference/features/vtgate-buffering/
So you don't understand how something works. That's fine. But to then say the article and/or tech are BS is... a choice.
This work has been and is being used by some of the largest sites / apps in the world including Uber, Slack, GitHub, Square... But sure, "it's BS, super custom, and irrelevant". Gee, yer super smart! Thank you for the amazing insights. 5 stars.
mattlord 1 days ago [-]
Blog post author here. I'm happy to answer any related questions you may have.
willquack 1 days ago [-]
> you can run an initial VDiff, and then resume that one as you get closer to the cutover point.
VDiff (v2) only compares the source and destination at a specific point in time, and resume only compares rows whose PK is higher than the last one compared before it was paused. I assume this means:
1. VDiff doesn't catch updates to rows with a PK lower than the point where it was paused, which could have become corrupt, and
2. VDiff doesn't continuously validate CDC changes, meaning (unless you enforce extra downtime to run / resume a VDiff) you can never be 100% sure your data is valid before SwitchTraffic.
I'm curious whether this is something customers even care about, or whether point-in-time data validation is sufficient to catch any issues that could occur during migrations?
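To make the resume semantics concrete, here is a toy illustration (not VDiff's actual code) of a PK-ordered, chunked comparison, using sqlite3 connections as a stand-in for the two sides; the table layout, id column, and resume_from parameter are made up. A resumed run starts at resume_from, so rows below that point were only ever validated as of the earlier pass, and later updates to them go unchecked:

```python
# Toy sketch of a resumable, PK-ordered diff. Each pass is point-in-time:
# it walks the table once in PK order and never revisits earlier chunks.
import hashlib
import sqlite3  # stand-in for the source and destination databases

def chunk_digest(conn, table, pk_from, pk_to):
    """Hash every row whose PK falls in [pk_from, pk_to)."""
    rows = conn.execute(
        f"SELECT * FROM {table} WHERE id >= ? AND id < ? ORDER BY id",
        (pk_from, pk_to),
    ).fetchall()
    return hashlib.sha256(repr(rows).encode()).hexdigest()

def diff(source, target, table, max_pk, chunk_size=1000, resume_from=0):
    mismatched_chunks = []
    for lo in range(resume_from, max_pk, chunk_size):
        hi = lo + chunk_size
        if chunk_digest(source, table, lo, hi) != chunk_digest(target, table, lo, hi):
            mismatched_chunks.append((lo, hi))
    # Rows with PK < resume_from were only compared during the earlier pass;
    # anything modified there afterwards is invisible to this resumed run.
    return mismatched_chunks
```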
mattlord 1 days ago [-]
You are correct about resuming. If you do an initial VDiff and then resume that same VDiff, say, 1 month later, it would only diff rows with a higher PK value.
But there's also nothing stopping you from doing a new VDiff to cover all data at that later point in time.
willquack 10 hours ago [-]
Thanks for responding!!
I think it's still the same issue where data modified after the VDiff point in time isn't validated before SwitchTraffic. I'm mostly curious how Vitess users handle this case, or whether any users even care about this case in the first place?
Is there no demand for continuous data validation similar to what TiDB offers?
Do people who care about 100% correct data validation just accept the downtime required to run a full VDiff before SwitchTraffic?
freakynit 18 hours ago [-]
"But there's also nothing stopping you from doing a new VDiff to cover all data at that later point in time." --- isn't this just pushing the same issue forward in time? How is data consistency maintained if a customer reverts back to original while having served a few request from new one already?
mattlord 5 hours ago [-]
It's open source. If you really want to know these things, I would encourage you to look at the code and read the documentation. As noted in the blog post, reverse vreplication is set up when you switch. You can switch back and forth and nothing is lost.
https://github.com/vitessio/vitess
https://vitess.io/docs/reference/vreplication/
"isn't this just pushing the same issue forward in time?" I don't understand what you are trying to say here. You can only compare the two sides / databases at the same logical point in time. While you are doing this comparison at that point in time, the timeline continues to progress. Unless you want to stop the world and prevent writes for the full duration of the diff (which can be days or even weeks).
l5870uoo9y 1 days ago [-]
What does it cost to host a 400TB database?
freakynit 17 hours ago [-]
Enterprise-grade NVMe SSDs typically cost around $150/TB. For an RF of 3, this comes to around 400 x 3 x 150 = 180K USD. With a minimum 5-year lifecycle for these enterprise SSDs, we are looking at 36K USD/year.
Going through their pricing (https://planetscale.com/pricing?engine=vitess&cluster=M-5120...), for just 15TB of storage with RF=3, the pricing comes to around 24,000 USD/MONTH, not per year. Adjusted for 400TB and per year, this becomes 7.6 million USD. Of course you also get a lot more, but the difference is just insane.
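The arithmetic behind those figures, taking the per-TB price, replication factor, and the quoted monthly tier as given (a back-of-the-envelope check, not vendor quotes):

```python
# Back-of-the-envelope check of the numbers quoted above.
tb, rf, usd_per_tb, lifetime_years = 400, 3, 150, 5
diy_capex = tb * rf * usd_per_tb             # 180,000 USD for raw drives
diy_per_year = diy_capex / lifetime_years    # 36,000 USD/year

quoted_monthly_15tb = 24_000                 # USD/month for 15 TB at RF=3
managed_per_year = quoted_monthly_15tb * (tb / 15) * 12  # ~7.68M USD/year

print(diy_capex, diy_per_year, round(managed_per_year))
```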
Dylan16807 15 hours ago [-]
That comparison doesn't make any sense at all, and you can't excuse it by tossing out "Of course, you also get a lot more". This is like evaluating the price of wheels by buying entire cars. You wouldn't get dozens of these servers just for capacity, you'd get a custom quote.
That said, at $24K/month you could pay off an entire server like that from Dell in 4 months, despite Dell charging something stupid like $2,000/TB.
freakynit 10 hours ago [-]
Lets hear your numbers then.
Dylan16807 3 hours ago [-]
Your numbers are basically fine for what you're measuring, if you round up to factor in actually having servers to put the storage drives into. So 40-50k instead of 36k.
The issue is your budget is for 400TB of data but minimal requests per second. That's a valid thing to consider, but it's extremely apples and oranges to a fleet of 75 high powered servers.
To put it a different way, their prices are pretty high but the calculation of powerful servers costing 40x as much as raw storage isn't "insane".
redwood 1 days ago [-]
That 400TB in the image is a large database! I'm guessing that's not the largest in the PlanetScale fleet either. Very impressive, and a reminder that you're strongly differentiated against some of the recent database upstarts in terms of battle-tested, mission-critical scale. Out of curiosity, how many of these large clusters are using your true managed 'as a service' offering, or are they mostly in the bring-your-own-cloud mode? Do you offer zero-downtime migrations from bring-your-own-cloud to true as-a-service?
mattlord 1 days ago [-]
That particular cluster has grown significantly since the post was written, and yes there are now quite a few others that are challenging it for the "largest" claim. :-)
These larger ones are fully using the PlanetScale SaaS, but they are using Managed -- meaning that there are resources dedicated to and owned by them. You can read more about that here: https://planetscale.com/docs/vitess/managed
All of the PlanetScale features, including imports and online schema migrations or deployment requests (https://planetscale.com/docs/vitess/schema-changes/deploy-re...), are fully supported with PlanetScale Managed.
Understood: that's great for your customers' EDP negotiations with their cloud providers!
WaitWaitWha 1 days ago [-]
I would split step 4 in their "high level, this is the general flow for data migrations".
4.0 Freeze the old system.
4.1 Cut over application traffic to the new system.
4.2 Merge any diff that happened between the snapshot (step 1) and the cutover (4.1).
4.3 Go live.
To me, the above reduces the pressure on downtime because the merge between freeze and go-live is significantly smaller than trying to go live with the entire environment. If timed well, the diff could be minuscule.
What they are describing is basically live-mirroring the resource. Okay, that is fancy, nice. I'd love to be able to do that. Some of us have mildly chewed bubble gum, a foot of duct tape, and a shoestring.
dheera 1 days ago [-]
Yeah it depends on what the system is.
Lots of systems can tolerate a lot more downtime than the armchair VPs want them to have.
If people don't have access to Instagram for 6 hours, the world won't end. Gmail or AWS S3 is a different story. So Instagram should give their engineers a break and permit a migration with downtime. It makes the job a lot easier, requires fewer engineers and less cost, and is much less likely to have bugs.
ksec 1 days ago [-]
Missing 2024 in the title.
redwood 1 days ago [-]
Worth underlining that this is about data migrations from one database server or system to another, rather than schema migrations.