> The technical fix was embarrassingly simple: stop pushing to main every ten minutes.
Wait, you push straight to main?
> We added a rule — batch related changes, avoid rapid-fire pushes. It's in our CLAUDE.md (the governance file that all our AI agents follow):
> Avoid rapid-fire pushes to main — 11 pushes in 2h caused overlapping Kamal deploys with concurrent SQLite access.
Wait, you let _Claude_ push your e-commerce code straight to main which immediately results in a production deploy?
chasil 1 days ago [-]
This is the actual problem:
"Kamal runs blue-green deploys — it starts a new container, health-checks it, then stops the old one. During the switchover, both containers are running. Both mount ultrathink_storage. Both have the SQLite files open."
WAL mode requires shared access to System V IPC mapped memory. This is unlikely to work across containers.
In case anybody needs a refresher:
https://en.wikipedia.org/wiki/Shared_memory
https://en.wikipedia.org/wiki/CB_UNIX
https://www.ibm.com/docs/en/aix/7.1.0?topic=operations-syste...
I think you're exactly right about the WAL shared memory not crossing the container boundary. EDIT: It looks like WAL works fine across Docker boundaries, see https://news.ycombinator.com/item?id=47637353#47677163
I don't know much about Kamal but I'd look into ways of "pausing" traffic during a deploy - the trick where a proxy pretends that a request is taking another second to finish when it's actually held in the proxy while the two containers switch over.
From https://kamal-deploy.org/docs/upgrading/proxy-changes/ it looks like Kamal 2's new proxy doesn't have this yet, they list "Pausing requests" as "coming soon".
Pausing requests then running two sqlites momentarily probably won’t prevent corruption. It might make it less likely and harder to catch in testing.
The easiest approach is to kill sqlite, then start the new one. I’d use a unix lockfile as a last-resort mechanism (assuming the container environment doesn’t somehow break those).
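The last-resort unix lockfile mentioned above can be sketched with flock(2). This is an illustrative pattern, not from the post; the lock path is hypothetical, and flock's usual caveats apply (it survives bind mounts but is unreliable on NFS):

```python
import fcntl, os, tempfile

# Guard exclusive access to the SQLite files with an advisory
# flock(2) lock; the new container refuses to start (or waits)
# until the old one has released the database. Path is illustrative.
LOCK_PATH = os.path.join(tempfile.gettempdir(), "app-sqlite.lock")

def acquire_db_lock(blocking=True):
    fd = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR)
    flags = fcntl.LOCK_EX | (0 if blocking else fcntl.LOCK_NB)
    try:
        fcntl.flock(fd, flags)
        return fd          # hold this fd for the process lifetime
    except BlockingIOError:
        os.close(fd)
        return None        # someone else still has the database

# First holder wins; a concurrent non-blocking attempt fails.
first = acquire_db_lock()
second = acquire_db_lock(blocking=False)
print(first is not None, second is None)  # True True
```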
simonw 1 days ago [-]
I'm saying you pause requests, shut down one of the SQLite containers, start up the other one and un-pause.
Retr0id 1 days ago [-]
> I think you're exactly right about the WAL shared memory not crossing the container boundary.
I don't, fwiw (so long as all containers are bind mounting the same underlying fs).
Could the two containers in the OP have been running on separate filesystems, perhaps?
jmull 1 days ago [-]
I dug into this limitation a bit around a year ago on AWS, using a sqlite db stored on an EFS volume (I think it was EFS -- relying on memory here) and lambda clients.
Although my tests were slamming the db with reads and writes, I didn't induce a bad read or write using WAL.
But I wouldn't use experimental results to override what the sqlite people are saying. I (and you) probably just didn't happen to hit the right access pattern.
https://sqlite.org/wal.html
Retr0id 22 hours ago [-]
"the sqlite people" don't say anything that contradicts this
Retr0id 1 days ago [-]
Perhaps they're using NFS or something - which would give them issues regardless of container boundaries.
The containers would need to use a path on a shared FS to setup the SHM handle, and, even then, this sounds like the sort of thing you could probably break via arcane misconfiguration.
I agree shm should work in principle though.
PunchyHamster 1 days ago [-]
Not how SQLite works (any more)
> The wal-index is implemented using an ordinary file that is mmapped for robustness. Early (pre-release) implementations of WAL mode stored the wal-index in volatile shared-memory, such as files created in /dev/shm on Linux or /tmp on other unix systems. The problem with that approach is that processes with a different root directory (changed via chroot) will see different files and hence use different shared memory areas, leading to database corruption. Other methods for creating nameless shared memory blocks are not portable across the various flavors of unix. And we could not find any method to create nameless shared memory blocks on windows. The only way we have found to guarantee that all processes accessing the same database file use the same shared memory is to create the shared memory by mmapping a file in the same directory as the database itself.
chasil 1 days ago [-]
You might consider taking the database(s) out of WAL mode during a migration.
They tell you to use a proper FS, which is largely orthogonal to containerization.
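Switching out of WAL mode for the duration of a migration, as suggested above, is a one-liner per database, though it only succeeds when no other connection holds the file. A minimal sketch:

```python
import os, sqlite3, tempfile

db = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(db)

# Normal running mode.
m1 = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(m1)  # wal

# Before the migration/switchover: back to a rollback journal,
# which needs no shared-memory wal-index. Requires that this is
# the only open connection to the database.
m2 = con.execute("PRAGMA journal_mode=DELETE").fetchone()[0]
print(m2)  # delete
con.close()
```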
jmull 1 days ago [-]
WAL relies on shared memory, so while a proper FS is necessary, it isn't going to help in this case.
fauigerzigerk 1 days ago [-]
Why does it not help if both containers can mmap the same -shm file?
jmull 1 days ago [-]
Shared memory across containers is a property of a containerization environment, not a property of a file system, "proper" or not.
Retr0id 22 hours ago [-]
It's a property of the filesystem, docker does not virtualize fs.
merb 1 days ago [-]
btw nfs that is mentioned here is fine in sync mode. However that is slow.
PunchyHamster 1 days ago [-]
> WAL mode requires shared access to System V IPC mapped memory.
Incorrect. It requires access to mmap()
"The wal-index is implemented using an ordinary file that is mmapped for robustness. Early (pre-release) implementations of WAL mode stored the wal-index in volatile shared-memory, such as files created in /dev/shm on Linux or /tmp on other unix systems. The problem with that approach is that processes with a different root directory (changed via chroot) will see different files and hence use different shared memory areas, leading to database corruption."
> This is unlikely to work across containers.
I'd imagine the sqlite code would fail if that was the case; in the case of k8s at least, mounting the same storage to 2 containers in most configurations causes K8s to co-locate both pods on the same node, so it should be fine.
It is far more likely they just fucked up the code and lost data that way...
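The mmap-based design quoted above is easy to observe: in WAL mode, SQLite keeps the wal-index in an ordinary `-shm` file next to the database, so any process (or container) that can see that directory can map the same memory. A small demonstration using Python's bundled sqlite3:

```python
import os, sqlite3, tempfile

# Create a database in WAL mode and observe the companion files
# SQLite places beside it: the write-ahead log and the file that
# is mmapped to hold the wal-index.
db = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(db)
con.execute("PRAGMA journal_mode=WAL")
con.execute("CREATE TABLE t (x)")
con.execute("INSERT INTO t VALUES (1)")
con.commit()

# While a connection is open, both files exist on disk.
wal_exists = os.path.exists(db + "-wal")  # write-ahead log
shm_exists = os.path.exists(db + "-shm")  # mmapped wal-index
print(wal_exists, shm_exists)  # True True
con.close()
```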
NeXTstep?
(Leaving aside fun spitballing about whether Tahoe is morally OPENSTEP 26, and whether it was NeXT that actually bought Apple for negative $400 million...)
chasil 24 hours ago [-]
Alas, I never had access to any of the Next environments, until PPC MacOS.
I did hold a copy in my hands for 486-class machines in the college bookstore.
crabmusket 4 days ago [-]
Patient: doctor, my app loses data when I deploy twice during a 10 minute interval!
Doctor: simply do not do that
pavel_lishin 4 days ago [-]
Doctor: solution is simple, stop letting that stupid clown Pagliacci define how you do your work!
Patient: but doctor,
pjc50 1 days ago [-]
pAIgliacci: as a large language model, I am unable to experience live comedy.
I'm fairly confident they let it write the blog post too.
simonw 1 days ago [-]
"Not as a proof of concept. Not for a side project with three users. A real store" - suggestion for human writers, don't use "not X, not Y" - it carries that LLM smell whether or not you used an LLM.
xnorswap 1 days ago [-]
And that's just the opening paragraph, the full text is rounded off with:
"The constraint is real: one server, and careful deploy pacing."
Another strong LLM smell, "The <X> is real", nicely bookends an obviously generated blog-post.
These335 1 days ago [-]
You're absolutely right, this was an AI post
yokuze 9 hours ago [-]
I see what you did there XD
bombcar 1 days ago [-]
Hey, Apple still takes their store down during product launches!
pstuart 1 days ago [-]
I assumed that it was to ensure that the announced products were revealed in a controlled manner rather than because they aren't able to do updates to their product listings as a regular thing.
bombcar 1 days ago [-]
My reading of the tea leaves is it started out as the latter and continues as the former as part of the “mystique”.
littlestymaar 1 days ago [-]
> Wait, you let _Claude_ push your e-commerce code straight to main which immediately results in a production deploy?
Yikes. Thank you I'm not going to read “Lessons learned” by someone this careless.
66yatman 1 days ago [-]
The issue wasn't caused by the AI but by their lack of architectural knowledge
okkdev 20 hours ago [-]
Goes hand in hand
whateveracct 15 hours ago [-]
stupid is as stupid does
tensegrist 1 days ago [-]
i hate to be so blunt but look around the site and then tell me you're surprised
burnt-resistor 1 days ago [-]
I suspect they don't wear helmets or seatbelts either. Sigh. The "I'm so proud and ignorant of unnecessarily risky behaviors" meme is tiring.
The Meta dev model of diff reviews merge into main (rebase style) after automated tests run is pretty good.
Also, staging and canary, gradual, exponential prod deployment/rollback approaches help derisk change too.
Finally, have real, tested backups and restore processes (not replicated copies) and ability to rollback.
infamia 4 days ago [-]
SQLite has a ".backup" command that you should always use to back up a SQLite DB. You're risking data loss/corruption using "cp" to back up your database as prescribed in the article.
Related, there is also sqlite3_rsync that lets you copy a live database to another (optionally) live database, where either can be on the network, accessed via ssh. A snapshot of the origin is used so writes can continue happening while the sqlite3_rsync is running. Only the differences are copied. The documentation is thorough.
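The same online-backup machinery behind `.backup` is exposed in most language bindings; for instance, Python's sqlite3 module (table name and paths here are illustrative):

```python
import os, sqlite3, tempfile

d = tempfile.mkdtemp()
src = sqlite3.connect(os.path.join(d, "production.sqlite3"))
src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
src.execute("INSERT INTO orders DEFAULT VALUES")
src.commit()

# Connection.backup uses SQLite's online backup API: it takes a
# consistent snapshot even while other connections keep writing,
# which a plain cp cannot guarantee.
dst_path = os.path.join(d, "backup.sqlite3")
dst = sqlite3.connect(dst_path)
src.backup(dst)
dst.close()

check = sqlite3.connect(dst_path)
count = check.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 1
```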
"I know about the .backup command, there's no way I'm using cp to backup the SQLite db from production."
Oh.
Guess I know what I'm fixing before lunch. Thank you :)
warmwaffles 1 days ago [-]
Yes, especially if you are using a WAL.
qingcharles 24 hours ago [-]
Totally. It also explains why I was confused to find the WAL files when I was testing the backups last week.
anonzzzies 1 days ago [-]
Yeah, using cp to backup sqlite is a very bad idea. And yet, unless you know this, this is what Claude etc will implement for you. Every friggin' time.
hombre_fatal 1 days ago [-]
Well, humans also default to 'cp' until they learn the better pattern or find out their backup is missing data.
Also, my n=1 is that I told Claude to create a `make backup` task and it used .backup.
I don't understand the double standard though. Why do we pretend us humans are immaculate in these AI convos? If you had the prescience to be the guy who looked up how to properly back up an sqlite db, you'd have the prescience to get Claude to read docs. It's the same corner cut.
There's this weird contradiction where we both expect and don't expect AI to do anything well. We expect it to yolo the correct solution without docs since that's what we tried to make it do. And if it makes the error a human would make without docs, of course it did, it's just AI. Or, it shouldn't have to read docs, it's AI.
refulgentis 21 hours ago [-]
You're confusing a workman's winking complaint about their tool, with, being unfair by not treating AI like a human.
hombre_fatal 6 hours ago [-]
I'm making a general observation about this frequent genre of complaint.
refulgentis 3 hours ago [-]
And I'm lucky enough to be making an observation about your general observation about this frequent genre of complaint
hombre_fatal 22 minutes ago [-]
I don't get what you're trying to say then.
HoldOnAMinute 22 hours ago [-]
It works fine as long as no one is writing to the sqlite file and you are not in WAL mode, which is not the default.
anonzzzies 1 hours ago [-]
But we were talking about server loads here: does anyone run sqlite server-side not in WAL mode?
chasil 1 days ago [-]
It's fine if you run the equivalent of "init 1" first.
Does your OS have a single-user mode?
BartjeD 1 days ago [-]
The bottom part of the article mentions they use .backup - did they add that later or did you miss it?
BartjeD 1 days ago [-]
The post now says they changed it due to feedback from Hacker news. All good.
crazygringo 1 days ago [-]
> Would We Choose SQLite Again? Yes. For a single-server deployment with moderate write volume, SQLite eliminates an entire category of infrastructure complexity. No connection pool tuning. No database server upgrades. No replication lag.
These are weird reasons. You can just install Postgres or MySQL locally too. Connection pool tuning certainly isn't anything you have to worry about for a moderate write volume. You don't ever need to upgrade the database if you don't want to, since you're not publicly exposing it. There's obviously no replication lag if you're not replicating, which you wouldn't be with a single server.
The reason you don't usually choose SQLite for the web is future-proofing. If you're totally sure you'll always stay single-server forever, then sure, go for it. But if there's even a tiny chance you'll ever need to expand to multiple web servers, then you'll wish you'd chosen a client-server database from the start. And again, you can run Postgres/MySQL locally, on even the tiniest cheapest VPS, basically just as easily as using SQLite.
kaibee 1 days ago [-]
Yeah a PG Docker container is basically magic. I too went down a rabbit-hole of trying to setup a write-heavy SQLite thing because my job is still using CentOS6 on their AWS cluster (don't ask). Once I finally got enough political capital to get my own EC2 box I could put a PG docker container on, so much nonsense I was doing just evaporated.
NewEntryHN 1 days ago [-]
It's a spectrum. Installing Postgres locally is not 100% future-proofing since you'll still need to migrate your local Postgres to a central Postgres. Using Sqlite is not 0% future-proofing since it's still using the SQL standard.
If the only argument for a piece of tech in comparison to another one is "future-proofing", that's pretty much acknowledging the other one is simpler to setup and maintain.
crazygringo 1 days ago [-]
> It's a spectrum.
For web servers specifically, no, SQLite is not generally part of that spectrum. That makes as much sense as saying that in a kitchen, you want a spectrum of knives from Swiss Army Knives to chef's knives. No -- Swiss Army Knives are not part of the spectrum. For web servers, you do have a wide spectrum of database options from single servers to clusters to multi-region clusters, along with many other choices. But SQLite is not generally part of that spectrum, because it's not client-server.
> since you'll still need to migrate your local Postgres to a central Postres
No you don't. You leave your DB in-place and turn off the web server part. Or even if you do want to migrate to something beefier when needed, it's basically as easy as copying over a directory. It's nothing compared to migrating from SQLite to Postgres.
> since it's still using the SQL standard.
No, every variant of SQL is different. You'll generally need to review every single query to check what needs rewriting. Features in one database work differently from in another. Most of the basic concepts are the same, and the basic syntax is the same, but the intermediate and advanced concepts can have both different features and different syntax. Not to mention sometimes wildly different performance that needs to be re-analyzed.
> that's pretty much acknowledging the other one is simpler to setup and maintain.
No it's not. What logic led you there...? They're basically equally simple to set up and maintain, but one also scales while the other doesn't. That's the point.
The main advantage of SQLite has nothing to do with setup and maintenance, but rather the fact that it is file-based and can be integrated into the binary of other applications, which makes it amazing for locally embedded databases used by user-installed applications. But these aren't advantages when you're running a server. And it becomes a problem when you need to scale to multiple webservers.
pullshark91 1 days ago [-]
OMG, you just killed it.
MattRogish 5 hours ago [-]
Yeah, the cost - both operationally and coding-wise - of running pgsql in some cloud is dwarfed by the cost of lost orders. "We'll just deploy less often" is tribal knowledge that will absolutely be forgotten at some point, and maybe there'll be more than two lost orders. Just set up PostgreSQL.
runako 1 days ago [-]
Have run PG, MySQL, and SQLite locally for production sites. Backups are much more straightforward for SQLite. They are running Kamal, which means "just install Postgres" would also likely mean running PG in a container, which has its own idiosyncrasies.
SQLite is not a terrible choice here.
crazygringo 1 days ago [-]
> Backups are much more straightforward for SQLite.
Not sure how? All of them can be backed up with a single command. But if you want live backups (replication) as opposed to daily or hourly, SQLite is the only one that doesn't support that.
MitziMoto 16 hours ago [-]
Litestream exists?
crazygringo 5 hours ago [-]
That's a third-party tool. It's not part of SQLite.
And it's a pretty hacky usage of the WAL. If it works for you, great, but if I need replication, I'm going to want a database that supports it natively.
xnorswap 1 days ago [-]
Yeah, it's weird "they" don't consider any middle ground between SQLite and replicated postgres cluster.
Locally running database servers are massively underrated as a working technology for smaller sites. You can even easily replicate it to another server for resiliency while keeping the local performance.
talkingtab 1 days ago [-]
This. Spinning up Postgresql is easy once you know how. Just as SQLITE3 is easy once you know how. But I can find no benefit from not just learning postgres the first time around.
kaibee 1 days ago [-]
They're using AI Agents to do it in either case and using docker. There was no reason to choose SQLite.
cadamsdotcom 4 days ago [-]
The fix appears to be nicely asking the forgetful unreliable agent to please (very closely pretty please!) follow the deploy instructions (and also please never hallucinate or mess up, because statistics tells us an entity with no long term memory and no incentive to get everything right will do the job right 99.99999999% of the time, which is good enough to run an eshop) and not deploy too often per hour.
With one simple instruction the system (99.9999% of the time) gains the handy property that “only” two processes end up with the database files open at once.
Thanks for the vibes!
devmor 1 days ago [-]
I have to work with agents as a part of my job and the very first thing I did when writing MCP tools for my workflow was to ensure they were read only or had a deterministic, hardcoded stopgap that evaluates the output.
I do not understand the level of carelessness and lack of thinking displayed in the OP.
mywittyname 1 days ago [-]
Even just having the agent write scripts to disk and run those works wonders. It keeps the agent from having to rebuild a script for the same tasks, etc.
devmor 1 days ago [-]
That too! Every time the agent does something I didn't intend, I end up making a tool or process guidance to prevent it from happening again. Not just add "don't do that" to the context.
jmull 4 days ago [-]
Redis, four dbs, container orchestration for a site of this modest scope… generated blog posts.
Our AI future is a lot less grand than I expected.
ramon156 1 days ago [-]
How else will you get all those resume entries ! (/j)
add-sub-mul-div 1 days ago [-]
Ironically, AI de-skilling results in a robust-sounding resume.
sgbeal 4 days ago [-]
> json_extract returns native types. json_extract(data, '$.id') returns an integer if the value was stored as a number. Comparing it to a string silently fails. Always CAST(json_extract(...) AS TEXT) when you need string comparison.
I took three weeks off from tech, read books from last century, and travelled Europe. Coming back, reading LLM generated content and code feels like nails on a chalkboard. Taste, it does not have taste.
PunchyHamster 1 days ago [-]
It is so tiring...
literallyroy 1 days ago [-]
It’s strange how easy it is to spot.
sgarland 1 days ago [-]
> The sqlite_sequence table is the most underappreciated debugging tool in SQLite. It tracks the highest auto-increment value ever assigned for each table — even if that row was subsequently lost.
> WorkQueueTask.count returns ~300 (current rows). The sequence shows 3,700+ (every task ever created). If those numbers diverge unexpectedly, something deleted rows it shouldn't have.
Or it means that SQLite is exhibiting some of its "maybe I will, maybe I won't" behavior [0]:
> Note that "monotonically increasing" does not imply that the ROWID always increases by exactly one. One is the usual increment. However, if an insert fails due to (for example) a uniqueness constraint, the ROWID of the failed insertion attempt might not be reused on subsequent inserts, resulting in gaps in the ROWID sequence. AUTOINCREMENT guarantees that automatically chosen ROWIDs will be increasing but not that they will be sequential.
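The count-vs-sequence divergence described above (gap caveats aside) is straightforward to reproduce; table name here mirrors the post's example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE work_queue_tasks (
    id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)""")
con.executemany("INSERT INTO work_queue_tasks (payload) VALUES (?)",
                [(f"task-{i}",) for i in range(5)])
con.execute("DELETE FROM work_queue_tasks WHERE id <= 3")

# Current rows vs. highest id ever assigned.
count = con.execute("SELECT COUNT(*) FROM work_queue_tasks").fetchone()[0]
seq = con.execute(
    "SELECT seq FROM sqlite_sequence WHERE name = 'work_queue_tasks'"
).fetchone()[0]
print(count, seq)  # 2 5 -- rows were deleted (or inserts failed)
```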
> No ILIKE. PostgreSQL developers reach for WHERE name ILIKE '%term%' instinctively. SQLite throws a syntax error. Use WHERE LOWER(name) LIKE '%term%' instead.
You should not be reaching for ILIKE, functions on predicates, or leading wildcards unless you're aware of the impacts those have on indexing.
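The dialect difference quoted above is easy to demonstrate (note also that SQLite's plain LIKE is already case-insensitive for ASCII by default, so LOWER() mainly buys predictable behavior). Table and data are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (name TEXT)")
con.execute("INSERT INTO products VALUES ('UltraThink Mug')")

# PostgreSQL's ILIKE is a syntax error in SQLite.
try:
    con.execute("SELECT name FROM products WHERE name ILIKE '%mug%'")
    ilike_ok = True
except sqlite3.OperationalError:
    ilike_ok = False

# The portable spelling.
rows = con.execute(
    "SELECT name FROM products WHERE LOWER(name) LIKE '%mug%'").fetchall()
print(ilike_ok, rows)  # False [('UltraThink Mug',)]
```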
> json_extract returns native types. json_extract(data, '$.id') returns an integer if the value was stored as a number. Comparing it to a string silently fails. Always CAST(json_extract(...) AS TEXT) when you need string comparison.
If you're using strings embedded in JSON as predicates, you're going to have a very bad time when you get more than a trivial number of rows in the table.
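The silent-failure mode quoted above comes from json_extract preserving the JSON type, so a JSON number compares as an SQLite integer. A reproduction (assumes a SQLite build with the JSON functions, which recent ones include):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (data TEXT)")
con.execute("""INSERT INTO events VALUES ('{"id": 123}')""")

# The JSON number comes back as an integer, so comparing it to a
# string matches nothing -- no error, just zero rows.
silent = con.execute(
    "SELECT * FROM events WHERE json_extract(data, '$.id') = '123'"
).fetchall()

# CAST makes the comparison explicit.
cast = con.execute(
    "SELECT * FROM events WHERE CAST(json_extract(data, '$.id') AS TEXT) = '123'"
).fetchall()
print(len(silent), len(cast))  # 0 1
```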
please consider writing it yourself. quirks in human writing are infinitely more interesting than a next-token-predicted 500 word piece
NewsaHackO 1 days ago [-]
But then how would they get people to buy their $99 AI CEO package?
pullshark91 1 days ago [-]
Huh, and here I thought it was a joke...
NewsaHackO 1 days ago [-]
Maybe it is, didn't really look into it.
bob1029 12 hours ago [-]
I think SQLite is fantastic but it does start to fall apart at the edges sometimes.
What is more interesting to me is the fact that everyone seems to think of Postgres as the obvious alternative to SQLite. It is certainly an alternative. For me, the most opposite thing of SQLite is something like Oracle or MSSQL.
The complexity being relatively constant is the part I care about most here. Running a paid, COTS database engine on a blessed OS tends to be a little bit easier than an OSS solution that can run on toasters and drones. Especially, if you are using replication, high availability, etc.
The business liability coverage seems to track proportionally with how much money you spend on the solution. SQLite offers zero guarantees accordingly. You don't have a support contract or an account manager you can get upset with. Depending on the nature of the business this could be preferable or adverse. It really depends.
For serious regulated business with oppressive audit cycles, SQLite trends toward liability more than asset if it's being used as a system of record. That it merely works and performs well is often not sufficient for acceptance. I'm not saying that Postgres isn't capable of passing an intense audit, but I am saying that it might be easier to pass it if you used MSSQL. The cost of having your staff tied up with compliance should be considered when making technology choices in relevant businesses.
throwaway173738 8 hours ago [-]
Most of us don’t work in businesses where indemnity coverage is more important than licensing costs.
bob1029 7 hours ago [-]
I agree these exist but I don't know about "most". Licensing costs are almost always a drop in the bucket compared to things like your salary.
adobrawy 1 days ago [-]
If the problem is excessive deployments via GitHub Actions, why not use concurrency control on GitHub Actions ( https://docs.github.com/en/actions/how-tos/write-workflows/c... ) instead of relying on agent randomness and the hope that it won't make the same mistake again? Am I missing something?
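For reference, the documented way to serialize workflow runs is a top-level `concurrency` group; with cancellation off, a new deploy queues behind the in-flight one instead of overlapping it. A minimal fragment (group name illustrative):

```yaml
# .github/workflows/deploy.yml
concurrency:
  group: production-deploy      # one deploy at a time
  cancel-in-progress: false     # queue instead of killing a live deploy
```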
politelemon 4 days ago [-]
> embarrassingly simple
This is becoming the new overused LLM goto expression for describing basic concepts.
kristiandupont 1 days ago [-]
SQLite is a rock solid piece of software that offers a great value prop: in-process database. For locally running apps (desktop or mobile), this makes perfect sense.
However, I genuinely don't see the appeal when you are in a client/server environment. Spinning up Postgres via a container is a one-liner and equally simple for tests (via testcontainers or pglite). The "simple" type system of SQLite feels like nothing but a limitation to me.
jszymborski 4 days ago [-]
The LLM prose is a grating read. I promise, you'd do a better job yourself.
littlestymaar 1 days ago [-]
Given how dumb their workflow is (let Claude Code push directly to production without supervision) I'm not so sure.
jmull 1 days ago [-]
I don't know if it's just me, but this whole post seems to have time traveled forward from about 3-4 days ago.
It's not just a repost. The thread includes a comment I made at the time which now reads "1 hour ago".
Makes me wonder if it's an honest bug or someone has hacked the hacker news front page to sell their t-shirts, mugs, and AI starter kits.
Retr0id 1 days ago [-]
It's an artefact of the "second chance pool" mechanism.
worksonmine 1 days ago [-]
Interesting choice to change the time of the comment, a deja-vu can be weird enough without staring at a comment with a recent timestamp.
mattrighetti 1 days ago [-]
I see tons of articles like this, and I have no doubt sqlite proved to be a great piece of software in production environments, but what I rarely find discussed is that we lack tools that enable you to access and _maintain_ SQLite databases.
It's so convenient to just open Datagrip and have a look at all my PostgreSQL instances; that's not possible with sqlite AFAIK (not even SSH tunnelling?). If something goes wrong, you have to SSH into the machine and use raw SQL. I know there are some cool front-end interfaces to inspect the db but it requires more setup than you'd expect.
I think that most people give up on sqlite for this reason and not because of its performance.
simonw 1 days ago [-]
I have a project to help with that:
uvx datasette data.db
That starts a web app on port 8001 for browsing and querying the database.
I've a busy app; I just deploy to canary and use a load balancer to move 5% of traffic to it, observe how it reacts, and then roll out the canary changes to all.
how hard and complex is it to roll out postgres?
pezh0re 4 days ago [-]
Not hard at all - geerlingguy has a great Ansible role and there are a metric crapton of guides pre-AI/2022 that cover gardening.
fbuilesv 24 hours ago [-]
> The Fix: Stop Deploying So Fast
I don't know how Ultrathink works, and I have no "real world" experience with Kamal, but I find it intriguing to see someone consider 11 deployments in 2 hours to be "fast".
Instead of handicapping yourself, fix your deployment pipeline, 10 min deploys are not OK for an online store.
rienbdj 1 days ago [-]
A well designed system shouldn’t drop orders?
If you perform at least once processing then use Stripe idempotency keys you avoid such issues?
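Stripe's idempotency keys are sent as the `Idempotency-Key` request header; the only requirement is that the key be stable across retries of the same logical order. A hypothetical sketch of deriving such a key (function name and key scheme are made up):

```python
import uuid

def charge_idempotency_key(order_id: str) -> str:
    """Deterministic key: every retry of the same order reuses it,
    so a duplicated worker or re-run deploy can't double-charge."""
    return str(uuid.uuid5(uuid.NAMESPACE_URL, f"charge/order/{order_id}"))

# Retries collide on purpose; different orders never do.
same = charge_idempotency_key("1017") == charge_idempotency_key("1017")
diff = charge_idempotency_key("1017") == charge_idempotency_key("1018")
print(same, diff)  # True False
```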
heikkilevanto 1 days ago [-]
I use SqLite for a small hobby project, fine for that. Wanted to read the article to see why I should not, but it attacked me with a "subscribe" popup, so I stopped there. The comments here seem to be based on daydreaming on scaling to a lot of users who need 24/7 uptime, which is not always the case.
PunchyHamster 1 days ago [-]
> Yes. For a single-server deployment with moderate write volume, SQLite eliminates an entire category of infrastructure complexity. No connection pool tuning. No database server upgrades. No replication lag.
None of these is needed if you run sqlite sized workloads...
I like SQLite but right tools for right jobs... tho data loss is most likely code bug
nop_slide 1 days ago [-]
I still haven't figured out a good way to do blue/green sqlite deploys on fly.io. Is this just a limitation of using sqlite or using Fly? I've been very happy with sqlite otherwise, rather unsure how to do a cutover to a new instance.
Anyone have some docs on how to cutover gracefully with sqlite on other providers?
wolttam 1 days ago [-]
You accept downtime. That's the limitation of SQLite.
Or you use some distributed SQLite tool like rqlite, etc
nop_slide 1 days ago [-]
I'm personally fine with a little bit of downtime for my particular small app. I'm just surprised there's not a more detailed story around deploying sqlite in a high availability prod environment given its increased popularity and coverage over the last few years. Especially surprising with Rails (my stack) going full "sqlite-first".
wolttam 24 hours ago [-]
The "sqlite-first" folks have accepted that a bit of downtime is better than engineering wildly complex systems that avoid it, for non-mission-critical apps (if your mission is a low volume e-commerce shop.. it's not critical)
1 days ago [-]
leosanchez 4 days ago [-]
> Backups are cp production.sqlite3 backup.sqlite3
I use gobackup[0] as another container in compose.yml file which can backup to multiple locations.
Does cp actually work on live sqlite files? I wouldn’t expect it to, since cp does not create a crash-consistent snapshot.
sgbeal 1 days ago [-]
> Does cp actually work on live sqlite files? I wouldn’t expect it to, since cp does not create a crash-consistent snapshot.
cp "works" but it has a very strong possibility of creating a corrupt copy (the more active the db, the higher the chance of corruption). Anyone using "cp" for that purpose does not have a reliable backup.
sqlite3_rsync and SQLite's "vacuum into" exist to safely create backups of live databases.
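The "vacuum into" route mentioned above works from any binding; `VACUUM INTO` writes a transactionally consistent, defragmented copy of the live database to a new file (paths below are temp-dir placeholders):

```python
import os, sqlite3, tempfile

d = tempfile.mkdtemp()
src_path = os.path.join(d, "live.db")
dst_path = os.path.join(d, "backup.db")

con = sqlite3.connect(src_path)
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
con.execute("INSERT INTO orders (total) VALUES (19.99)")
con.commit()

# The target filename is an SQL expression, so it can be bound
# as a parameter. Requires SQLite >= 3.27.
con.execute("VACUUM INTO ?", (dst_path,))
con.close()

check = sqlite3.connect(dst_path)
n = check.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(n)  # 1
```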
66yatman 23 hours ago [-]
Maybe if the system is idle
siruwastaken 1 days ago [-]
Am I the only one finding this article highly suspect? It seems like the errors made are so basic, i.e. using the wrong SQL dialect for the db system in use, and their orders were apparently only at 17?
trelliumD 1 days ago [-]
could have used Firebird embedded: also a simple deployment, like SQLite, but with better concurrency and a more complete system, and a tad faster
mt42or 1 days ago [-]
NIH syndrome, almost mental health issues.
elliot07 21 hours ago [-]
You should use Shopify and focus on the store part of the business, not the SQLite part :)
MagicMoonlight 1 days ago [-]
Slopcoded article for a Slopcoded website
66yatman 1 days ago [-]
Just use a 4gb server and install Postgres
NicoJuicy 1 days ago [-]
If Nico send him an email. The AI CEO should take his offer.
nonameiguess 23 hours ago [-]
How the hell did this get so much engagement, let alone as a repost? Is "SQLite" in the title really all it takes? This site was registered 8 months ago, the whole blog started in February, the first post declares its writer to be an AI CEO, most posts are about hiring and managing other AI agents, it claims to sell everything from coffee mugs to services from itself to also be the CEO of your business. This feels more like performance art than a business. There's no evidence they've sold anything or even have any actual inventory. AI agents can't build you a warehouse or manufacture physical goods.
You guys are arguing with a bot, in a way almost arguing with yourselves, as it may very well not have actually done any of this, is definitely not running a "real store," and is seemingly publishing posts that are a parody of Hacker News style founder journeys but if the founders were bots.
stock-parrot 13 hours ago [-]
You are absolutely right! Everything feels off and done for engagement
Is this what we can expect in the near future?
mergisi 10 hours ago [-]
[dead]
pgideas 1 days ago [-]
[dead]
ryguz 1 days ago [-]
[dead]
minutesmith 1 days ago [-]
[flagged]
Rendered at 20:29:13 GMT+0000 (Coordinated Universal Time) with Vercel.
Wait, you push straight to main?
> We added a rule — batch related changes, avoid rapid-fire pushes. It's in our CLAUDE.md (the governance file that all our AI agents follow):
> Avoid rapid-fire pushes to main — 11 pushes in 2h caused overlapping Kamal deploys with concurrent SQLite access.
Wait, you let _Claude_ push your e-commerce code straight to main which immediately results in a production deploy?
"Kamal runs blue-green deploys — it starts a new container, health-checks it, then stops the old one. During the switchover, both containers are running. Both mount ultrathink_storage. Both have the SQLite files open."
WAL mode requires shared access to System V IPC mapped memory. This is unlikely to work across containers.
In case anybody needs a refresher:
https://en.wikipedia.org/wiki/Shared_memory
https://en.wikipedia.org/wiki/CB_UNIX
https://www.ibm.com/docs/en/aix/7.1.0?topic=operations-syste...
I think you're exactly right about the WAL shared memory not crossing the container boundary. EDIT: It looks like WAL works fine across Docker boundaries, see https://news.ycombinator.com/item?id=47637353#47677163
I don't know much about Kamal but I'd look into ways of "pausing" traffic during a deploy - the trick where a proxy pretends that a request is taking another second to finish when it's actually held in the proxy while the two containers switch over.
From https://kamal-deploy.org/docs/upgrading/proxy-changes/ it looks like Kamal 2's new proxy doesn't have this yet, they list "Pausing requests" as "coming soon".
The easiest approach is to kill sqlite, then start the new one. I’d use a unix lockfile as a last-resort mechanism (assuming the container environment doesn’t somehow break those).
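The unix-lockfile idea could be sketched like this (paths are made up for illustration; `flock()` is advisory, so it only helps if every process that opens the database goes through the same gate, and whether it survives the container runtime's mount setup is exactly the caveat raised above):

```python
import fcntl
import os
import sqlite3
import tempfile

def open_db_exclusive(db_path, lock_path):
    """Open the SQLite database only after winning an exclusive lock.

    flock() blocks until the previous holder (e.g. the old container,
    sharing the lock file via the same bind mount) has exited.
    """
    lock = open(lock_path, "w")
    fcntl.flock(lock, fcntl.LOCK_EX)   # blocks until the old holder releases
    conn = sqlite3.connect(db_path)
    return conn, lock                  # keep `lock` alive for the process lifetime

# Demo: once one "container" holds the lock, a second attempt is refused.
workdir = tempfile.mkdtemp()
conn, lock = open_db_exclusive(os.path.join(workdir, "app.db"),
                               os.path.join(workdir, "app.lock"))
second = open(os.path.join(workdir, "app.lock"), "w")
try:
    fcntl.flock(second, fcntl.LOCK_EX | fcntl.LOCK_NB)
    second_holder_blocked = False
except BlockingIOError:
    second_holder_blocked = True
```

Bind mounts on the same host generally propagate flock; network filesystems (NFS, EFS) historically have not, which is the "last-resort" part.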
I don't, fwiw (so long as all containers are bind mounting the same underlying fs).
Could the two containers in the OP have been running on separate filesystems, perhaps?
Although my tests were slamming the db with reads and writes, I didn't induce a bad read or write using WAL.
But I wouldn't use experimental results to override what the sqlite people are saying. I (and you) probably just didn't happen to hit the right access pattern.
https://sqlite.org/wal.html
The containers would need to use a path on a shared FS to setup the SHM handle, and, even then, this sounds like the sort of thing you could probably break via arcane misconfiguration.
I agree shm should work in principle though.
> The wal-index is implemented using an ordinary file that is mmapped for robustness. Early (pre-release) implementations of WAL mode stored the wal-index in volatile shared-memory, such as files created in /dev/shm on Linux or /tmp on other unix systems. The problem with that approach is that processes with a different root directory (changed via chroot) will see different files and hence use different shared memory areas, leading to database corruption. Other methods for creating nameless shared memory blocks are not portable across the various flavors of unix. And we could not find any method to create nameless shared memory blocks on windows. The only way we have found to guarantee that all processes accessing the same database file use the same shared memory is to create the shared memory by mmapping a file in the same directory as the database itself.
That would eliminate the need for shared memory.
See more: https://sqlite.org/wal.html#concurrency
Incorrect. It requires access to an mmap()ed file in the same directory as the database:
"The wal-index is implemented using an ordinary file that is mmapped for robustness. Early (pre-release) implementations of WAL mode stored the wal-index in volatile shared-memory, such as files created in /dev/shm on Linux or /tmp on other unix systems. The problem with that approach is that processes with a different root directory (changed via chroot) will see different files and hence use different shared memory areas, leading to database corruption."
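That quote is easy to verify from any SQLite build: enabling WAL creates the `-wal` file and the mmapped `-shm` wal-index right next to the database, which is why every process (or container) needs to see the same directory:

```python
import os
import sqlite3
import tempfile

# In WAL mode the "shared memory" is just a -shm file next to the db,
# mmapped by every connection, plus the -wal write-ahead log itself.
d = tempfile.mkdtemp()
db = os.path.join(d, "app.db")
conn = sqlite3.connect(db)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()
files = sorted(os.listdir(d))   # app.db plus the -shm and -wal siblings
```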
> This is unlikely to work across containers.
I'd imagine the sqlite code would fail if that were the case. With k8s, at least, mounting the same storage into 2 containers in most configurations causes K8s to co-locate both pods on the same node, so it should be fine.
It is far more likely they just fucked up the code and lost data that way...
Why not?
Some that I used that are gone... Ultrix (MIPS), Clix, Irix, SunOS 4, SCO OpenServer, TI System V.
https://en.wikipedia.org/wiki/Ultrix
https://en.wikipedia.org/wiki/Intergraph
I did hold a copy in my hands for 486-class machines in the college bookstore.
Doctor: simply do not do that
Patient: but doctor,
"The constraint is real: one server, and careful deploy pacing."
Another strong LLM smell, "The <X> is real", nicely bookends an obviously generated blog-post.
Yikes. Thank you, I'm not going to read "Lessons learned" by someone this careless.
The Meta dev model, where reviewed diffs merge into main (rebase-style) after automated tests run, is pretty good.
Also, staging and canary, gradual, exponential prod deployment/rollback approaches help derisk change too.
Finally, have real, tested backups and restore processes (not replicated copies) and ability to rollback.
https://sqlite.org/cli.html#special_commands_to_sqlite3_dot_...
https://sqlite.org/rsync.html
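For reference, the `.backup` dot-command linked above wraps SQLite's Online Backup API, which Python also exposes as `Connection.backup()`; unlike `cp`, it takes a consistent snapshot even if another connection is mid-write:

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
src = sqlite3.connect(os.path.join(d, "live.db"))
src.execute("CREATE TABLE orders(id INTEGER PRIMARY KEY, total REAL)")
src.execute("INSERT INTO orders(total) VALUES (9.99)")
src.commit()

# Connection.backup() uses the same Online Backup API as the CLI's .backup,
# copying pages through SQLite's locking protocol rather than the raw file.
dst = sqlite3.connect(os.path.join(d, "backup.db"))
src.backup(dst)
n = dst.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```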
Oh.
Guess I know what I'm fixing before lunch. Thank you :)
Also, my n=1 is that I told Claude to create a `make backup` task and it used .backup.
I don't understand the double standard though. Why do we pretend us humans are immaculate in these AI convos? If you had the prescience to be the guy who looked up how to properly back up an sqlite db, you'd have the prescience to get Claude to read docs. It's the same corner cut.
There's this weird contradiction where we both expect and don't expect AI to do anything well. We expect it to yolo the correct solution without docs since that's what we tried to make it do. And if it makes the error a human would make without docs, of course it did, it's just AI. Or, it shouldn't have to read docs, it's AI.
Does your OS have a single-user mode?
These are weird reasons. You can just install Postgres or MySQL locally too. Connection pool tuning certainly isn't anything you have to worry about for a moderate write volume. You don't ever need to upgrade the database if you don't want to, since you're not publicly exposing it. There's obviously no replication lag if you're not replicating, which you wouldn't be with a single server.
The reason you don't usually choose SQLite for the web is future-proofing. If you're totally sure you'll always stay single-server forever, then sure, go for it. But if there's even a tiny chance you'll ever need to expand to multiple web servers, then you'll wish you'd chosen a client-server database from the start. And again, you can run Postgres/MySQL locally, on even the tiniest cheapest VPS, basically just as easily as using SQLite.
If the only argument for a piece of tech in comparison to another one is "future-proofing", that's pretty much acknowledging the other one is simpler to setup and maintain.
For web servers specifically, no, SQLite is not generally part of that spectrum. That makes as much sense as saying that in a kitchen, you want a spectrum of knives from Swiss Army Knives to chef's knives. No -- Swiss Army Knives are not part of the spectrum. For web servers, you do have a wide spectrum of database options from single servers to clusters to multi-region clusters, along with many other choices. But SQLite is not generally part of that spectrum, because it's not client-server.
> since you'll still need to migrate your local Postgres to a central Postgres
No you don't. You leave your DB in-place and turn off the web server part. Or even if you do want to migrate to something beefier when needed, it's basically as easy as copying over a directory. It's nothing compared to migrating from SQLite to Postgres.
> since it's still using the SQL standard.
No, every variant of SQL is different. You'll generally need to review every single query to check what needs rewriting. Features in one database work differently from in another. Most of the basic concepts are the same, and the basic syntax is the same, but the intermediate and advanced concepts can have both different features and different syntax. Not to mention sometimes wildly different performance that needs to be re-analyzed.
> that's pretty much acknowledging the other one is simpler to setup and maintain.
No it's not. What logic led you there...? They're basically equally simple to set up and maintain, but one also scales while the other doesn't. That's the point.
The main advantage of SQLite has nothing to do with setup and maintenance, but rather the fact that it is file-based and can be integrated into the binary of other applications, which makes it amazing for locally embedded databases used by user-installed applications. But these aren't advantages when you're running a server. And it becomes a problem when you need to scale to multiple webservers.
SQLite is not a terrible choice here.
Not sure how? All of them can be backed up with a single command. But if you want live backups (replication) as opposed to daily or hourly, SQLite is the only one that doesn't support that.
And it's a pretty hacky usage of the WAL. If it works for you, great, but if I need replication, I'm going to want a database that supports it natively.
Locally running database servers are massively underrated as a working technology for smaller sites. You can even easily replicate it to another server for resiliency while keeping the local performance.
With one simple instruction the system (99.9999% of the time) gains the handy property that “only” two processes end up with the database files open at once.
Thanks for the vibes!
I do not understand the level of carelessness and lack of thinking displayed in the OP.
Our AI future is a lot less grand than I expected.
Or it means that SQLite is exhibiting some of its "maybe I will, maybe I won't" behavior [0]:
> Note that "monotonically increasing" does not imply that the ROWID always increases by exactly one. One is the usual increment. However, if an insert fails due to (for example) a uniqueness constraint, the ROWID of the failed insertion attempt might not be reused on subsequent inserts, resulting in gaps in the ROWID sequence. AUTOINCREMENT guarantees that automatically chosen ROWIDs will be increasing but not that they will be sequential.
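The quoted behavior is easy to poke at; whether a failed insert actually leaves a gap depends on the SQLite version and the failure mode, but the monotonic-increase guarantee always holds:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE u(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT UNIQUE)")
conn.execute("INSERT INTO u(name) VALUES ('alice')")
try:
    conn.execute("INSERT INTO u(name) VALUES ('alice')")  # UNIQUE violation
except sqlite3.IntegrityError:
    pass
conn.execute("INSERT INTO u(name) VALUES ('bob')")
ids = [r[0] for r in conn.execute("SELECT id FROM u ORDER BY id")]
# ids are guaranteed increasing, but 'bob' is not guaranteed any particular
# value: per the docs, the failed attempt's rowid may simply never be reused.
```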
> No ILIKE. PostgreSQL developers reach for WHERE name ILIKE '%term%' instinctively. SQLite throws a syntax error. Use WHERE LOWER(name) LIKE '%term%' instead.
You should not be reaching for ILIKE, functions on predicates, or leading wildcards unless you're aware of the impacts those have on indexing.
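To make that concrete, here is a sketch (table and index names are made up): SQLite's default `LIKE` is already case-insensitive for ASCII, and declaring the column `COLLATE NOCASE` lets an index serve a case-insensitive prefix search, while a leading wildcard forces a full scan regardless:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products(name TEXT COLLATE NOCASE)")
conn.execute("CREATE INDEX idx_name ON products(name)")
conn.executemany("INSERT INTO products VALUES (?)", [("Widget",), ("Gadget",)])

# Default LIKE is case-insensitive for ASCII, so no ILIKE (or LOWER) needed:
rows = conn.execute(
    "SELECT name FROM products WHERE name LIKE 'wid%'").fetchall()

# A leading wildcard ('%term%') can never use the index, which is the
# warning above; the query plan degrades to a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM products WHERE name LIKE '%dget%'"
).fetchall()
```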
> json_extract returns native types. json_extract(data, '$.id') returns an integer if the value was stored as a number. Comparing it to a string silently fails. Always CAST(json_extract(...) AS TEXT) when you need string comparison.
If you're using strings embedded in JSON as predicates, you're going to have a very bad time when you get more than a trivial number of rows in the table.
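The typing gotcha in the quoted point reproduces in a few lines (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events(data TEXT)")
conn.execute("""INSERT INTO events VALUES ('{"id": 42}')""")

# Stored as a JSON number, so json_extract returns a native integer:
native = conn.execute(
    "SELECT json_extract(data, '$.id') FROM events").fetchone()[0]

# Comparing that integer to a string silently matches nothing...
missed = conn.execute(
    "SELECT COUNT(*) FROM events WHERE json_extract(data, '$.id') = '42'"
).fetchone()[0]

# ...while CASTing to TEXT makes the string comparison work as intended.
fixed = conn.execute(
    "SELECT COUNT(*) FROM events "
    "WHERE CAST(json_extract(data, '$.id') AS TEXT) = '42'"
).fetchone()[0]
```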
0: https://sqlite.org/autoinc.html
please consider writing it yourself. quirks in human writing is infinitely more interesting than a next-token-predicted 500 word piece
What is more interesting to me is the fact that everyone seems to think of Postgres as the obvious alternative to SQLite. It is certainly an alternative. For me, the most opposite thing of SQLite is something like Oracle or MSSQL.
The complexity being relatively constant is the part I care about most here. Running a paid, COTS database engine on a blessed OS tends to be a little bit easier than an OSS solution that can run on toasters and drones. Especially, if you are using replication, high availability, etc.
The business liability coverage seems to track proportionally with how much money you spend on the solution. SQLite offers zero guarantees accordingly. You don't have a support contract or an account manager you can get upset with. Depending on the nature of the business this could be preferable or adverse. It really depends.
For serious regulated business with oppressive audit cycles, SQLite trends toward liability more than asset if it's being used as a system of record. That it merely works and performs well is often not sufficient for acceptance. I'm not saying that Postgres isn't capable of passing an intense audit, but I am saying that it might be easier to pass it if you used MSSQL. The cost of having your staff tied up with compliance should be considered when making technology choices in relevant businesses.
This is becoming the new overused LLM go-to expression for describing basic concepts.
However, I genuinely don't see the appeal when you are in a client/server environment. Spinning up Postgres via a container is a one-liner and equally simple for tests (via testcontainers or pglite). The "simple" type system of SQLite feels like nothing but a limitation to me.
It's not just a repost. The thread includes a comment I made at the time, which now shows as from "1 hour ago".
Makes me wonder if it's an honest bug or someone has hacked the Hacker News front page to sell their t-shirts, mugs, and AI starter kits.
It's so convenient to just open Datagrip and have a look at all my PostgreSQL instances; that's not possible with sqlite AFAIK (not even SSH tunnelling?). If something goes wrong, you have to SSH into the machine and use raw SQL. I know there are some cool front-end interfaces to inspect the db but it requires more setup than you'd expect.
I think that most people give up on sqlite for this reason and not because of its performance.
https://latest.datasette.io/fixtures
how hard and complex is it to roll out postgres?
I don't know how Ultrathink works, and I have no "real world" experience with Kamal, but I find it intriguing to see someone consider 11 deployments in 2 hours to be "fast".
Instead of handicapping yourself, fix your deployment pipeline, 10 min deploys are not OK for an online store.
If you perform at-least-once processing and use Stripe idempotency keys, don't you avoid such issues?
None of these is needed if you run sqlite sized workloads...
I like SQLite, but right tools for right jobs... though the data loss is most likely a code bug.
Anyone have some docs on how to cutover gracefully with sqlite on other providers?
Or you use some distributed SQLite tool like rqlite, etc
I use gobackup[0] as another container in compose.yml file which can backup to multiple locations.
[0]: https://gobackup.github.io/
cp "works" but it has a very strong possibility of creating a corrupt copy (the more active the db, the higher the chance of corruption). Anyone using "cp" for that purpose does not have a reliable backup.
sqlite3_rsync and SQLite's "vacuum into" exist to safely create backups of live databases.
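For completeness, `VACUUM INTO` (SQLite 3.27+) is the one-liner version: it writes a consistent, defragmented snapshot to a new file while the database stays live:

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
live = os.path.join(d, "live.db")
snap = os.path.join(d, "snapshot.db")

conn = sqlite3.connect(live)
conn.execute("CREATE TABLE t(x)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()

# Unlike `cp`, VACUUM INTO reads through SQLite's locking protocol, so the
# snapshot is transactionally consistent even with concurrent writers.
conn.execute("VACUUM INTO ?", (snap,))
count = sqlite3.connect(snap).execute("SELECT COUNT(*) FROM t").fetchone()[0]
```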