While I understand that you could do this, I genuinely don't understand why.
SQLite and Postgres/MySQL/etc. occupy different niches. If you need massive concurrent writes, surely that's what Postgres/MySQL/etc. is for? Their engines are built around that from the ground up.
SQLite is built around a file that stores data for a single application, as opposed to being client-server with many clients. I've used it a ton for that, but the idea that your application would have so many threads needing to write concurrently that it would be slowed down by a file lock... just doesn't make sense to me.
What applications need this, and why wouldn't they use Postgres/MySQL/etc. if they need such a high level of concurrent performance? This feels like trying to adapt a small sports car to tow a semi-trailer. It doesn't seem like it's what it's meant for.
The single-writer limitation in SQLite is per-database, not per-connection. You can shard your SQLite tables into multiple database files and query across all of them from a single connection.
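As a sketch of what that sharding looks like (hypothetical file and table names; Python's stdlib `sqlite3` used for illustration): each shard lives in its own file with its own write lock, and `ATTACH DATABASE` lets a single connection query, and even join, across them.

```python
import os
import sqlite3
import tempfile

d = tempfile.mkdtemp()
users_db = os.path.join(d, "users.db")    # shard 1
orders_db = os.path.join(d, "orders.db")  # shard 2

# Each shard is a separate file, so each has its own write lock.
c = sqlite3.connect(users_db)
c.execute("CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT)")
c.execute("INSERT INTO users VALUES (1, 'ada')")
c.commit()
c.close()

c = sqlite3.connect(orders_db)
c.execute("CREATE TABLE orders(user_id INTEGER, item TEXT)")
c.execute("INSERT INTO orders VALUES (1, 'widget')")
c.commit()
c.close()

# A single connection can ATTACH both files and join across them.
conn = sqlite3.connect(users_db)
conn.execute("ATTACH DATABASE ? AS orders_shard", (orders_db,))
row = conn.execute(
    "SELECT u.name, o.item FROM users u "
    "JOIN orders_shard.orders o ON o.user_id = u.id"
).fetchone()
print(row)  # ('ada', 'widget')
```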
I agree that "the single-writer limitation isn't just a theoretical concern", but it's also solvable without forking SQLite. ulimit's the limit! If your goal is resource maximization of a given computer, though, Postgres is likely a better fit.
Joins and Transactions are a pretty big part of SQL. I'm no expert, but if my quick search results are right, both are lost in the separate file per table scenario.
Even in the “low-performing” zone of the single-threaded SQLite and Turso, we are still talking about 50k rows per second and 1000 microseconds, aka 1 ms. That is insanely fast; 1 ms is just the round-trip for my Postgres on RDS. It is amazing that SQLite is so awesome. I understand it is not for every use case, but it is still awesome.
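For a rough sense of those numbers, a minimal single-threaded insert benchmark (Python's stdlib `sqlite3`, in-memory database, row count chosen arbitrarily) might look like this; actual throughput depends heavily on hardware and batching:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(x INTEGER)")

n = 50_000
start = time.perf_counter()
with conn:  # one transaction; batching is what makes bulk inserts fast
    conn.executemany("INSERT INTO t VALUES (?)", ((i,) for i in range(n)))
elapsed = time.perf_counter() - start

rows = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"{rows} rows in {elapsed:.4f}s ({rows / elapsed:,.0f} rows/s)")
```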
Don't forget that the SQLite team is working on their own multi-writer mode that blows BEGIN CONCURRENT out of the water: https://news.ycombinator.com/item?id=34434025
Though this stuff moves slowly (that announcement was almost 3 years ago!), so I'm glad to see Turso giving us options today.
Looks like that branch is still under active development - seven new commits this month: https://sqlite.org/hctree/timeline
Kind of cool to see work on this. I do hope that the final db file result is still binary compatible with SQLite 3 in whatever direction Turso moves towards though... Rust or not.
I've been advocating with several projects over recent years to get SQLite3 as an archive/export/interchange format for data. Need to archive 2019 data from the database, dump it into a SQLite db with roughly the same schema... Need to pass multiple CSVs worth of data dumps, use a SQLite file instead.
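The CSV-to-SQLite case can be sketched in a few lines (Python's stdlib `sqlite3` and `csv`; the column names and inline CSV text below are hypothetical stand-ins for a real dump):

```python
import csv
import io
import sqlite3

# Hypothetical CSV dump; in practice this would be a file on disk.
csv_text = "id,name\n1,alpha\n2,beta\n"

# ":memory:" keeps the sketch self-contained; point this at a real path
# to produce a single shareable .db file instead of multiple CSVs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE export (id INTEGER, name TEXT)")

reader = csv.DictReader(io.StringIO(csv_text))
conn.executemany(
    "INSERT INTO export VALUES (?, ?)",
    [(row["id"], row["name"]) for row in reader],
)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM export").fetchone()[0]
print(count)  # 2
```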
As a secondary, I wonder if it's possible to actively use a SQLite interface against a database file on S3, assuming a single server/instance is the actual active connection.
SQLite against S3 can work with some clever tricks. The neatest version of that I've seen is still this WebAssembly one: https://phiresky.github.io/blog/2021/hosting-sqlite-database...
I also got sqlite-s3vfs working from Python a few months ago: https://simonwillison.net/2025/Feb/7/sqlite-s3vfs/
Both of these are very much read-only mechanisms though.
SQLite directly against S3 is workable if you mean querying a read-only database.
For example, from Go, you could use my driver, and point it to a database file stored in S3 using this: https://pkg.go.dev/github.com/ncruces/go-sqlite3/vfs/readerv...
For read-write it's a terrible idea. Object storage assumes objects are immutable. There may be some support for appends, but modifying the middle of an object in place involves copying the entire thing.
What is on the verge of becoming viable is to use Litestream to do asynchronous replication to S3, and have read replicas that stream the data directly from S3. But what's stored in S3 isn't a database file, but a format created for the purpose called LTX.
> As a secondary, I wonder if it's possible to actively use a SQLite interface against a database file on S3, assuming a single server/instance is the actual active connection.
You could achieve this today using one of the many adapters that turn S3 into a file system, without needing to wait for any SQLite buy in.
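For the read-only case, that could look something like the sketch below. The mount itself is assumed (e.g. s3fs or a similar FUSE adapter), and a local temp directory stands in for the hypothetical mount point so the sketch runs anywhere. `mode=ro` refuses writes, and `immutable=1` additionally skips file locking, which object-storage filesystems generally can't provide:

```python
import os
import sqlite3
import tempfile

# Stand-in for a FUSE-mounted bucket path such as /mnt/s3 (hypothetical).
mount_dir = tempfile.mkdtemp()
db_path = os.path.join(mount_dir, "archive.db")

# Seed a database so the read-only open below has something to read.
seed = sqlite3.connect(db_path)
seed.execute("CREATE TABLE logs(msg TEXT)")
seed.execute("INSERT INTO logs VALUES ('hello')")
seed.commit()
seed.close()

# mode=ro refuses writes; immutable=1 also skips POSIX locking, which
# S3-backed filesystems typically don't support.
conn = sqlite3.connect(f"file:{db_path}?mode=ro&immutable=1", uri=True)
msg = conn.execute("SELECT msg FROM logs").fetchone()[0]
print(msg)  # hello
```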
Can someone explain what "ecological niche" this new Turso DB occupies in between SQLite and Postgres?
TursoDB aims to be fully compatible with sqlite, so files you create with tursodb can be read by sqlite and vice-versa. Like sqlite, turso is in-process, so it runs alongside your application, which is quite different from postgres' client-server architecture.
The only thing Turso has in common with postgres is MVCC, which is a rather standard concurrency control model across modern databases. Idk if I answered your question :)
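Worth noting that stock SQLite already gives readers a form of snapshot isolation in WAL mode; what MVCC changes is concurrency between writers. A small sketch of the reader-snapshot behavior (Python's stdlib `sqlite3`, autocommit connections so the transaction boundaries are explicit):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer connection; WAL mode lets a reader and the writer coexist.
w = sqlite3.connect(path, isolation_level=None)
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE t(x INTEGER)")
w.execute("INSERT INTO t VALUES (1)")

# Reader opens a transaction; its first SELECT pins a snapshot.
r = sqlite3.connect(path, isolation_level=None)
r.execute("BEGIN")
before = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]

# The writer commits a new row while the read transaction is open.
w.execute("INSERT INTO t VALUES (2)")

during = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # old snapshot
r.execute("COMMIT")
after = r.execute("SELECT COUNT(*) FROM t").fetchone()[0]   # fresh view
print(before, during, after)  # 1 1 2
```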
Can't use it locally (yet?) but it's definitely an interesting move in the space. For my personal projects lately I've been defaulting to sqlite in dev, and having a database wrapper layer to use something else in prod.
> Can't use it locally

Why not? Turso is a fully local database, you can just download the shell and use it as you would use sqlite.
I'm imagining some insane replication behind the scenes, where every write is happening concurrently on a different SQLite DB, and then merged together sequentially into some master DB.
I’m not in a rush to use a reimplementation of SQLite — particularly from startup bros that had a very public, one-sided, and unpleasant fight with SQLite over their contribution model.
D. Richard Hipp is a genuinely fantastic human being, and SQLite is a project developed at literally the level of planetary infrastructure given how broadly and everywhere it appears.
Forking his project while using the name to garner recognition and attention is poor form, and it is difficult to have faith in the results.
> Forking his project while using the name
They don't use the name. They use Turso. Even the HN title is wrong - the article title doesn't mention SQLite.
They refer to SQLite, but how could you not if that's what you forked from, and that's what has the functionality you're changing. That would be a very weird article if we didn't have that context.
They state, plainly on their home page:
“The next evolution of SQLite”
That is a material misrepresentation, and it is absolutely trading on the SQLite name.
> “The next evolution of SQLite”
How? The whole point of trademark is to avoid confusing users that an alternate product is the same as the original product.
By explicitly saying "Next evolution of SQLite", or "A fork of SQLite", or a "Better SQLite"... all of these phrasings are saying that our product is distinct and different from SQLite.
If the fork were called "nue-sqlite" or "sqlitest" or "fastsqlite", there's an argument to be made.
The issue isn’t that you’re mentioning SQLite or acknowledging that it’s a fork.
The problem is that the phrase “the next evolution of SQLite” conveys continuity and endorsement.
A reasonable reader could conclude that Turso is an official successor or the next release of SQLite — that it represents the official lineage of the project.
Phrasing like “SQLite-compatible,” or “a fork of SQLite” would be clear and factual.
Calling it “the next evolution of SQLite” isn’t factual; it’s marketing positioning, and it implies ownership of SQLite’s identity and lineage.
This reflects a broader pattern in how the fork has been presented publicly.
The messaging often treats the original project as something to be fixed rather than a foundation to be respected.
Referring to Turso as a product while leveraging the SQLite name reinforces that framing — co-opting a public-domain engineering gift into a commercial asset.
Author of Turso here. Couple of points
* You are right not to rush. You should keep using SQLite until Turso matures. Some use cases are more tolerant to new tech than others. It will take time for us to reach the level of trust SQLite has for broad use cases, but we are hoping to add value for some use cases right away. Never rush, tech matters!
* I have never met Hipp, but only heard great things about him.
* We never had a fight with SQLite over their contribution model (or about anything for that matter, I never even met Hipp or anybody else from SQLite). We just disagree with it - in the sense that we believe in different things. We don't think what they do is fundamentally wrong. Different projects take different paths.
* We are not using the SQLite name. We compare ourselves to SQLite because we are file and API compatible, and we do aspire to raise the very high bar they have set. It is hard to do this without drawing the comparison, but we are a different project and state it very clearly. I am not a lawyer (and neither, it seems, are you), but we believe what we are doing is okay. If we ever have any valid reason to believe we crossed a line here, we will of course change course.
* We are not "startup bros". We spent 20+ years of our lives building databases and operating systems.
I get where you're coming from, but isn't the whole idea of open source "if you don't like the approach, you're free to fork the code and do it the way you think is right"?
As long as the fork doesn't violate trademark (Turso vs. SQLite), it is working-as-intended?
I, for one, encourage this kind of behavior. We should have more forks. More forks = more competition = better results for everyone.
---
To make an analogy: would you say the same thing if this were a for-profit company?
"I can't believe someone else is competing in the same space as $x. $x is hugely successful, and so many people use it. I don't know why there's an alternative."
I'd still give them the benefit of the doubt as most (all?) of the contributors are Finns who you can almost guarantee to have a "no bullshit" type of mentality to essentially everything. And what I'd guess, the team most probably has quite an academic background because of the local culture.