I’ve found this article lacking. Like some other articles in this space, it introduces isolation levels through the lens of the phenomena described in the SQL standard, but I find that there’s a different, more intuitive approach.
I think it’s more tractable to define this problem space starting from the concept of (strict) serializability, which is really a generalization of the concept of thread safety. Every software engineer has an intuitive understanding of it. Lack of serializability can lead to execution-dependent behavior, which usually results in hard-to-diagnose bugs. Thus, all systems should strive towards serializability, and the database can be a tool in achieving it.
Various non-serializable levels of database transaction isolation are relaxations of the serializability guarantee, where the database no longer enforces the guarantee and it’s up to the database user to ensure it through other means.
The isolation phenomena are a useful tool for visualizing various corner cases of non-serializability, but they are not inherently tied to it. It's possible to achieve serializability while observing all of the SQL phenomena. For example, a Kubernetes cluster with carefully-written controllers can be serializable.
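To make the relaxation concrete, the classic lost update is exactly the kind of interleaving the database stops preventing at weaker levels and the user must then guard against. A minimal sketch (Python with the stdlib sqlite3 module standing in for any SQL database; the schema and names are invented for illustration), simulating two clients doing read-modify-write with no transactional protection at all:

```python
import sqlite3

# In-memory DB; isolation_level=None means every statement autocommits.
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES (1, 100)")

def read_balance():
    return db.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]

# Two interleaved read-modify-write sequences, as two racing clients might run:
a = read_balance()          # client A reads 100
b = read_balance()          # client B reads 100
db.execute("UPDATE accounts SET balance = ? WHERE id = 1", (a + 10,))  # A writes 110
db.execute("UPDATE accounts SET balance = ? WHERE id = 1", (b + 10,))  # B writes 110

print(read_balance())  # 110, not 120: A's deposit was silently lost
```

A serializable database would abort one of the two transactions instead of letting this interleaving commit; at weaker levels, avoiding it is the application's job.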
bddicken 1 days ago [-]
Author here. This is good feedback.
The combination of transactions, isolation levels, and MVCC is such a huge undertaking to cover all at once, especially when comparing how it's done across multiple DBs, which I attempted here. It's always a balance between technical depth, accessibility to people with less experience, and not letting it turn into an hour-long read.
libraryofbabel 1 days ago [-]
I actually like this article a lot. I do a bit of teaching, and I imagined the ideal audience for this as a smart junior engineer who knows SQL and has encountered transactions but maybe doesn’t really understand them yet. I think introducing things via examples of isolation anomalies (which most engineers will have seen examples of in bugs, even if they didn’t fully understand them) gives the explanation a lot more concreteness than starting with serializability as a theoretical concept as GP is proposing. Sure, strict serializability is a powerful idea that ties all this together and is more satisfying for an expert who already knows this stuff. But for someone who is just learning, you have to motivate it first.
If anything, I’d say it might be better to start with the lower isolation levels first, highlight the concurrency problems that can arise with them, and gradually introduce higher isolation levels until you get to serializability. That feels a bit more intuitive rather than downward progression from serializability to read uncommitted as presented here.
It also might be nice to see a quick discussion of why people choose particular isolation levels in practice, e.g. why you might make a tradeoff under high concurrency and give up serializability to avoid waits and deadlocks.
But excellent article overall, and great visualizations.
jaxr 1 days ago [-]
I love the work planetscale does on keeping this type of content accurate yet accessible. Keep it up!
Notation is useful. Citations are nice for further reading. But I don't agree that more of this makes for a better article!
peterclary 1 days ago [-]
Looks like the author is geoblocking in protest of the UK Online Safety Act (and fair enough).
lateforwork 1 days ago [-]
Most RDBMSs offer serializable isolation if you need it. Often you don't need it. The downside of using serializable isolation unnecessarily is reduced concurrency and throughput due to increased coordination between transactions.
vvern 23 hours ago [-]
Still, I think it’s the right default to start with serializable. Then, when you have performance issues, you can think long and hard about whether relaxed isolation levels will work in a bug-free way. Better to start with a correct application.
lateforwork 23 hours ago [-]
Starting with serializable is not free, there is a coding cost to pay to handle all the concurrency errors.
magicalhippo 19 hours ago [-]
Sybase SQLAnywhere implements (or at least did) strict serialization by taking an exclusive row lock on all rows... which you can imagine scales horribly for a table with a reasonable row count.
I found out the hard way at work. I had assumed it took an exclusive lock on the table level only, the documentation didn't really spell out the details of how it enforced the serialized access.
I changed it to a retry loop which worked fine and was fairly easy to implement all things considered. Not gonna reach for strict serialization again unless I have to.
bddicken 1 days ago [-]
Yep. It's a wonderful capability to have for some situations, but for 90% of applications SERIALIZABLE isolation is overkill.
mika6996 1 days ago [-]
Then recommend a better explanation?
ignoramous 1 days ago [-]
> concept of (strict) serializability [("S")], which is really a generalization of the concept of thread safety
Unsure why "strict" (L + S) is in braces: Linearizability ("L") is what resembles safety in SMP systems the most?
rmunn 21 hours ago [-]
One way to think about transactions, as I wrote in an earlier comment, would be to think of them as being like snapshots in a copy-on-write filesystem like btrfs or zfs. But another way to think of them is being like Git branches.
When you BEGIN a transaction, you're creating a branch in Git. Everyone else continues to work on the master branch, perhaps making their own branches (transactions) off of it while you're working. Every UPDATE command you run inside the transaction is a commit pushed to your branch. If you do a ROLLBACK, then you're deleting the branch unmerged, and its changes will be discarded without ever ending up in the master branch. But if you instead do a COMMIT, then that's a `git merge` command, and your changes will be merged into the master branch.

If they merge cleanly, then all is well. If they do NOT merge cleanly, because someone else merged their own branch (committed their own transaction) that touched the same files that you touched (updated rows in the same table), then the DB will go through the file line by line (go through the table row by row) to try to get a clean merge. If it can successfully merge both changes without conflict, great. If it can't, then what happens depends on the transaction settings you chose. You can, when you start the transaction, tell the DB "If this doesn't merge cleanly, roll it back". Or you can say "If this doesn't merge cleanly, I don't care, just make sure it gets merged and I don't care if the conflict resolution ends up picking the "wrong" value, because for my use case there is no wrong value."

This is like using "READ UNCOMMITTED" vs "SERIALIZABLE" transaction settings (isolation levels): you would use "READ UNCOMMITTED" if you don't care about merge conflicts in this particular table, and just want a quick merge. You would use "SERIALIZABLE" for tables with data that must, MUST, be correct, e.g. account balances. And there are two more levels in between for subtle differences in your use case's requirements.
As with my previous comment, this is probably obvious to 98.5% of people here. But maybe it'll help someone get that "ah-ha!" moment and understand transactions better.
shep101 17 hours ago [-]
concurr
zadikian 1 days ago [-]
Seems like a frequent surprise is that Postgres and MySQL don't default to serializable (so not fully I in ACID). They do read-committed. I didn't see this article mention that, but maybe I missed it. The article says read-committed provides "slightly" better performance, but it's been way faster in my experience. Forget where, but I think they said they chose this default for that reason.
Using read-committed ofc means having to keep locking details in mind. Like, UNIQUE doesn't just guard against bad data entry, it can also be necessary for avoiding race conditions. But now that I know, I'd rather do that than take the serializable performance hit, and also have to retry xacts and deal with the other caveats at the bottom of https://www.postgresql.org/docs/current/transaction-iso.html
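A sketch of that UNIQUE-as-race-guard pattern (Python with stdlib sqlite3 as a stand-in for Postgres; table and function names invented): rather than SELECT-then-INSERT, which leaves a race window under read committed, attempt the INSERT and let the constraint arbitrate:

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE users (email TEXT UNIQUE)")

def signup(email):
    # A SELECT-then-INSERT existence check can race with a concurrent
    # signup; the UNIQUE constraint is what actually closes the window.
    try:
        db.execute("INSERT INTO users (email) VALUES (?)", (email,))
        return True
    except sqlite3.IntegrityError:
        return False  # someone else won the race

print(signup("a@example.com"))  # True
print(signup("a@example.com"))  # False: constraint caught the duplicate
```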
benoitg 1 days ago [-]
Recent versions of MySQL and MariaDB default to repeatable-read for InnoDB tables, not read-committed:
https://dev.mysql.com/doc/refman/8.4/en/set-transaction.html...
https://mariadb.com/docs/server/reference/sql-statements/adm...
I don't know about MyISAM though (who uses it anyway ;-) ).
layer8 1 days ago [-]
The issue with SERIALIZABLE, aside from performance, is that transactions can fail due to conflicts/deadlocks/timeouts, so application code must be prepared to recognize those cases and have a strategy to retry the transactions.
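That retry strategy usually ends up looking something like this sketch (plain Python; `SerializationFailure` here is a stand-in for the driver's real error, e.g. SQLSTATE 40001 in Postgres, and the function names are invented):

```python
import random
import time

class SerializationFailure(Exception):
    """Stand-in for the driver's 'could not serialize access' error."""

def run_transaction(txn_fn, retries=5):
    # txn_fn performs BEGIN ... COMMIT itself and raises
    # SerializationFailure when the database aborts the transaction.
    for attempt in range(retries):
        try:
            return txn_fn()
        except SerializationFailure:
            if attempt == retries - 1:
                raise
            # Exponential backoff with jitter before retrying.
            time.sleep((2 ** attempt) * 0.01 * random.random())

# Toy transaction that fails twice before succeeding:
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure
    return "committed"

print(run_transaction(flaky_txn))  # committed (on the third attempt)
```

The important part is that the whole transaction body is a function that can be re-executed from the top, since the aborted attempt's reads are worthless.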
zadikian 1 days ago [-]
Right. So my code had a helper to run some inner func in a serializable xact, in rw or ro mode, which would retry with backoff. Like the TransactionRunner in Spanner. But even with no retries occurring, it was very slow.
twic 1 days ago [-]
VoltDB took this to an extreme - the way you interact with it is by sending it some code which does a mix of queries and logic, and it automatically retries the code as many times as necessary if there's a conflict. Because it all happens inside the DBMS, it's transparent and fast. I thought that was really clever.
I'm using the past tense here, but VoltDB is still going. I don't think it's as well-known as it deserves to be.
zadikian 21 hours ago [-]
Interesting. How is that faster than just having the code running on the same machine as the DB? Guess it could be smarter about conflicts than random backoff.
lateforwork 1 days ago [-]
> Postgres and MySQL don't default to serializable
Oracle and SQL Server also default to read committed, not serializable. Serializable looks good in text books but is rarely used in practice.
zadikian 1 days ago [-]
Yeah, the only examples I know of it being default are Spanner and Cockroach, which are for a different use case.
interlocutor 1 days ago [-]
A lot of database tools these days prioritize instant sharing of updates over transactions and ACID properties. Example: Airtable. As soon as you update a field the update shows up on your coworkers screen who also has the same table open. The downside of this is that Airtable doesn't do transactions. And the downside of not doing transactions is potentially dangerous data inconsistencies. More about that here: https://visualdb.com/blog/concurrencycontrol/
gtowey 23 hours ago [-]
Not so subtle product promo while taking a swipe at your competition.
rishabhaiover 1 days ago [-]
It's an absolute pleasure reading planetscale blogs. I'm curious about what tool is used to make these visualizations?
bddicken 1 days ago [-]
Author here. Thank you! These visuals are built with js + gsap (https://gsap.com)
I like to think of transactions, in an MVCC system like Postgres, as being like snapshots in copy-on-write filesystems like btrfs or zfs. When you BEGIN a transaction, the DB takes a snapshot of your data, so now there are two versions of the data, the snapshot (visible to everyone else) and the "private" version visible only to your transaction. Then as you run UPDATEs, the new data is written to the private copy, but everyone else continues to work with the snapshot. (And might be creating their own private copies for other transactions).
If you do a ROLLBACK, then your private copy of the data is discarded, and its changes never make it into the official copy. But if you do a COMMIT, then your private snapshot is made public and is the new, official, copy for everyone else to read from. (Except those who started a transaction before you ran COMMIT: they made their private copies from the older snapshot and don't have a copy of your changes).
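A rough sketch of that visibility rule (Python with stdlib sqlite3, which uses locking rather than MVCC, but observably behaves the same way for this case: uncommitted changes stay private to the writer, and ROLLBACK discards them; the schema is invented for illustration):

```python
import os
import sqlite3
import tempfile

# A file-backed DB so two independent connections can share it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)  # explicit BEGIN/COMMIT
reader = sqlite3.connect(path, isolation_level=None)

writer.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO accounts VALUES (1, 100)")

# Writer opens a transaction: its changes are "private" until COMMIT.
writer.execute("BEGIN")
writer.execute("UPDATE accounts SET balance = 50 WHERE id = 1")

# Another connection still sees the last committed state.
print(reader.execute("SELECT balance FROM accounts").fetchone()[0])  # 100

writer.execute("ROLLBACK")  # the private copy is discarded
print(writer.execute("SELECT balance FROM accounts").fetchone()[0])  # 100
```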
This is probably obvious to nearly everyone here, but I figured I'd write it anyway. You never know who might read an analogy like this and have that lightbulb moment where it suddenly makes sense.
P.S. Another analogy would be Git branches, but I'll write that in a different comment.
Animats 20 hours ago [-]
Not quite. Databases use both branching and locking. Two transactions that conflict can cause one thread to block, rather than rolling back.
SELECT followed by an update is the most usual case for a block. (I have to code one today, and I want to see if I can rewrite it as one MySQL statement.)
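That rewrite can be sketched as follows (Python with stdlib sqlite3 purely for illustration; the single-statement form carries over to MySQL): fold the SELECT's condition into one atomic UPDATE and inspect the affected-row count instead of reading first.

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES (1, 100)")

def withdraw(amount):
    # Instead of SELECT balance ... then UPDATE (two statements that need
    # a lock or a transaction), make the check part of the statement:
    cur = db.execute(
        "UPDATE accounts SET balance = balance - ? WHERE id = 1 AND balance >= ?",
        (amount, amount),
    )
    return cur.rowcount == 1  # 0 rows updated means insufficient funds

print(withdraw(80))   # True: balance is now 20
print(withdraw(80))   # False: guard failed, balance unchanged
```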
rmunn 18 hours ago [-]
Yes, the analogy isn't perfect. I didn't want to get into all the subtleties in an introductory analogy, but I should probably have mentioned blocking. Too late for me to edit my post with your correction, though.
jascha_eng 1 days ago [-]
This actually used to be one of my favorite interview questions for backend engineers. Everyone has used transactions but depending on your seniority you'd understand it to different degrees.
And no, I'd never expect people to know the isolation levels by heart, but if you know there are different ones and that they behave differently, that's pretty good and tells me you are curious about how things work under the hood.
layer8 1 days ago [-]
The nominally same isolation levels can also behave differently on different database systems, so in general you have to investigate the details on a case-by-case basis anyway.
piskov 24 hours ago [-]
> A phantom read is one where a transaction runs the same SELECT multiple times, but sees different results the second time around
> Under the SQL standard, the repeatable read level allows phantom reads, though in Postgres they still aren't possible.
This is bad wording which could give the impression that repeatable read may show different values for the same rows. Values in previously read rows will be the same, but new rows may be added to the second result set. The restriction to new rows is important: no previously read row can be changed or deleted, otherwise those rows would not repeat the second time around.
unkulunkulu 1 days ago [-]
> At this stage, it has nothing to do with xmin and xmax, but rather because other transactions cannot see uncommitted data
Am I missing something, or is this statement incomplete? Also, I find the visualization of commit weird: it “points to” the header of the table, but then xmax gets updated “behind the scenes”? Isn't xmax/xmin “the mechanism behind how the database knows what is committed/not committed”? Also, there could be subtransactions, which make this statement even more contradictory.
I enjoyed the visualizations and explanations otherwise, thanks!
CWIZO 1 days ago [-]
I also think the article glossed/skipped over the xmax/xmin concepts. And they are fundamental to understand how different isolation levels actually work. It's quite jarring to the point I'm wondering if a whole section got accidentally dropped from the article.
Quarrelsome 1 days ago [-]
I have learned about the beauty of predicate locks. That's such a sexy way of dealing with the issue instead of just blithely funneling all writes.
lasgawe 1 days ago [-]
We built an entire system for a client-side project with millions of SQL rows and thousands of users without adding a single transaction. :/
layer8 1 days ago [-]
If you have no explicit transactions, every insert/update is its own transaction (aka auto-commit). Depending on what you do, you might not need more. It’s still important to know that these execute as a transaction.
zadikian 1 days ago [-]
Yep, there have been times I get through a whole project without any explicit transactions. In fact it can be a sign of not fully normalized schema design if you rely on those a lot (which can ofc be fine if you deliberately wanted that).
bddicken 1 days ago [-]
These are still transactions! It's not uncommon for a large % of transactions in an OLTP workload to be only one query without explicit BEGIN / COMMIT. This is called an autocommit transaction or implicit transaction.
libraryofbabel 1 days ago [-]
How nice for you. But since you totally neglected to say anything about your use-case or schema or query patterns, it’s impossible to know what this even means. Some use cases can trivially be done without any explicit transactions and you’re not giving anything up. For others (usually, something where you need to enforce invariants under high concurrency writes or writes+reads on the same data across multiple tables), transactions are pretty critical. So, it depends.
ninjaoxygen 1 days ago [-]
Did you use SELECT FOR UPDATE at all, or just never had to update dependent data? If the complex operations are implemented using stored functions / procedures, then a transaction is implicit.
If the data is fairly straightforward like just one-to-many CRUD with no circular references, you would be able to do it without transactions, just table relationships would be enough to ensure consistency.
zabzonk 1 days ago [-]
I thought this was pretty good, not least because it attempts to explain isolation levels, something I always found pretty tricky when teaching SQL. Mind you, I was only teaching SQL, and thus isolation, as part of C and C++ courses so that our clients could do useful stuff, but explaining which levels to use was always tough.
zzzeek 1 days ago [-]
I think this is a great post to have but I'm going to make a critical usability suggestion:
* the videos should have "pause" and a "step at a time" control *
Even at the "half speed", without a deep knowledge of the context, the videos move way too fast for me to read the syntax that's invoking and line it up with the data on the left side. I (and im definitely not the only one) need to be able to sit on one step and stare at the whole thing without the latent anxiety of the state changing before I've had a chance to grok the whole thing.
this has nothing to do with familiarity with the concepts (read my profile). I literally need time to read all the words and connect them together mentally (ooh, just noticed this is pseudo-SQL syntax also, e.g. "select id=4", that probably added some load for me) without worrying they're going to change before watching things move.
please add a step-at-a-time button!
bddicken 1 days ago [-]
I appreciate this feedback, and that you read through it with enough rigor to notice.
johanyc 9 hours ago [-]
I second it. At the very least a pause button is needed.
cryptonector 1 days ago [-]
In the section about serializable read TFA gets the `accounts` `balance` wrong.
bone_tag 1 days ago [-]
Yes, I wasn't sure if I misunderstood the concept or if there was an error in the article.
bddicken 1 days ago [-]
Thanks, fixed!
nkzd 1 days ago [-]
Have you ever seen anyone changing transaction isolation levels in code? I think pessimistic or optimistic locking is preferred way to handle transaction concurrency.
sgarland 1 days ago [-]
I have recommended that various teams drop down to READ-COMMITTED (MySQL) for various actions to avoid gap locks being taken, but AFAIK no one has done so yet.
chamomeal 1 days ago [-]
never used planetscale but I’ve always liked their blog, and other content. One of the founders interviewed on the software engineering daily podcast and it was super interesting
nazwa 1 days ago [-]
Is it just me, or are the final results of the deadlock visualisations incorrect? In both animations (mysql/pg), the final `SELECT balance from account...` queries appear to show the result of the two sessions, which have been terminated.
bddicken 1 days ago [-]
Author here. You're right! I'm fixing now.
skeeter2020 1 days ago [-]
[flagged]
ksec 1 days ago [-]
>What's up with the trend of articles that seem like they're written by someone after a couple of sessions in a freshmen comp sci class?
This is funny and cracked me up, because the author actually teaches CS at a university.
>Nothing new here and a discussion of DB transactions without even mentioning ACID compliance and the trade-offs? You're better off picking up a 40 year old textbook than posts like this.
That would have been a very long blog post. Edit: I just realise Ben has already replied above.
bigstrat2003 24 hours ago [-]
A miserable little pile of queries! But enough talk, have at you!