DEV Community

Shayan

Everyone Is Wrong About SQLite

Here's a spicy take: SQLite might be the best database choice for your next SaaS project. Yes, really. The same database that powers your browser history could handle your production workload better than that overengineered Postgres cluster you're thinking about.

Before you close this tab in disgust, hear me out. I've watched countless developers dismiss SQLite as a "toy database" while simultaneously struggling with connection pools, replication lag, and $500/month database bills. Meanwhile, companies like Expensify are processing $10 billion in transactions on SQLite.

The problem isn't SQLite. The problem is that we've been thinking about it completely wrong.

The "Lite" Problem

Let's address the elephant in the room: that unfortunate "lite" in the name. It's like naming a sports car "Slow McLaren" – technically accurate in some narrow sense but completely misleading about its actual capabilities.

SQLite isn't "Postgres Lite" or "MySQL for Beginners." It's a different beast entirely. While Postgres is built for client-server architecture with network protocols and connection management, SQLite is an embedded database engine. It's not lighter because it does less – it's lighter because it doesn't need an entire server process, network stack, and authentication system.

Here's what SQLite actually is:

  • The most deployed database engine in the world (billions of instances)
  • Battle-tested in production by companies processing millions of requests
  • Faster than client-server databases for many workloads
  • ACID-compliant with full transaction support
  • Capable of handling databases up to 281TB

But sure, let's keep calling it a toy because of that name.

When SQLite Absolutely Crushes It

Here's where conventional wisdom gets it backwards. SQLite isn't just "good enough" for certain use cases – it's actually the superior choice for many production scenarios:

1. Read-Heavy Workloads

If your app is 95% reads (like most SaaS apps), SQLite will embarrass your Postgres setup. No network roundtrips. No connection overhead. Just direct disk reads with an intelligent page cache. I've seen query times drop from 50ms to 0.5ms just by switching from Postgres to SQLite.

2. Single-Server Deployments

Running everything on one beefy server? SQLite eliminates an entire class of problems:

  • No connection pool exhaustion
  • No network latency
  • No split-brain scenarios
  • No replication lag
  • Backups are literally just copying a file

3. Edge Computing

Deploying to multiple regions? Each edge location can have its own SQLite database. Cloudflare Workers, Fly.io, and similar platforms make this trivial. Your users get sub-10ms response times, and you get a simple architecture.

4. Embedded Analytics

Need to crunch numbers on user data? SQLite can handle complex analytical queries on gigabytes of data. Window functions, CTEs, JSON operations – it's all there. And it's fast because there's no network overhead.
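
To make this concrete, here's a sketch using Python's stdlib sqlite3 module (the schema and numbers are invented for illustration): a CTE feeding a window function, all evaluated in-process with zero network overhead.

```python
import sqlite3

# In-memory database for illustration; point this at a real file in practice.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE events (user_id INTEGER, day TEXT, amount REAL);
    INSERT INTO events VALUES
        (1, '2025-01-01', 10), (1, '2025-01-02', 20),
        (2, '2025-01-01', 5),  (2, '2025-01-02', 7);
""")

# A CTE plus a window function: running revenue per user by day.
rows = con.execute("""
    WITH daily AS (
        SELECT user_id, day, SUM(amount) AS revenue
        FROM events GROUP BY user_id, day
    )
    SELECT user_id, day,
           SUM(revenue) OVER (PARTITION BY user_id ORDER BY day) AS running_total
    FROM daily ORDER BY user_id, day
""").fetchall()

for r in rows:
    print(r)
```

Window functions need SQLite 3.25+, which any remotely recent runtime bundles.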

The Real Limitations (Not What You Think)

Let's be honest about where SQLite struggles, because it's not where most people think:

❌ High Write Concurrency (But It's Not That Simple)

SQLite uses a single-writer model: one write at a time. But here's what the haters don't tell you: since Write-Ahead Logging (WAL) mode arrived in 2010, SQLite can handle concurrent readers WHILE writing. No more blocking reads during writes.

In WAL mode:

  • Writers don't block readers
  • Readers don't block writers
  • Multiple readers work simultaneously
  • Write performance improved dramatically

-- Enable WAL mode (run once; the setting persists in the database file)
PRAGMA journal_mode=WAL;

With WAL enabled and proper configuration, I've seen SQLite handle 500-1000+ writes/second on modern hardware while serving thousands of concurrent reads. Yes, Postgres can push higher numbers with multiple writers, but ask yourself: is your SaaS really doing more than 1000 writes per second? (Spoiler: it's not.)
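
For reference, here's the kind of connection setup that usually accompanies WAL in production, sketched with Python's stdlib sqlite3 module; the specific pragma values are common tuning choices, not requirements.

```python
import os
import sqlite3
import tempfile

def connect(path: str) -> sqlite3.Connection:
    con = sqlite3.connect(path, timeout=5.0)  # wait up to 5s on a locked db
    con.execute("PRAGMA journal_mode=WAL")    # readers don't block the writer
    con.execute("PRAGMA synchronous=NORMAL")  # safe under WAL, far fewer fsyncs
    con.execute("PRAGMA foreign_keys=ON")     # off by default in SQLite
    con.execute("PRAGMA busy_timeout=5000")   # retry instead of failing with SQLITE_BUSY
    return con

# WAL requires a real file; a throwaway temp path stands in for production.db.
path = os.path.join(tempfile.mkdtemp(), "app.db")
con = connect(path)
mode = con.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # -> wal
```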

❌ Multiple Application Servers

Need horizontal scaling across multiple servers hitting the same database? SQLite wasn't built for this. You'll need Postgres or MySQL. (Though solutions like LiteFS and rqlite are changing this game.)

❌ Complex Access Control

SQLite doesn't have users, roles, or row-level security. Your application handles all authorization. This is actually fine for 99% of SaaS apps where you're checking permissions in code anyway.
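
Concretely, application-level authorization just means scoping every query to the caller. A minimal sketch in Python's stdlib sqlite3 (table and names invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE documents (id INTEGER PRIMARY KEY, owner_id INTEGER, title TEXT);
    INSERT INTO documents VALUES (1, 42, 'Q1 plan'), (2, 7, 'Secret roadmap');
""")

def docs_for_user(user_id: int) -> list:
    # Authorization lives in the query itself: every read is scoped to the caller.
    return con.execute(
        "SELECT id, title FROM documents WHERE owner_id = ?", (user_id,)
    ).fetchall()

print(docs_for_user(42))  # only user 42's documents come back
```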

✅ But Here's What People Get Wrong:

  • "SQLite can't handle concurrent reads" – Wrong. It handles unlimited concurrent reads.
  • "SQLite doesn't support JSON" – Wrong. Full JSON support since 2015.
  • "SQLite can't do full-text search" – Wrong. FTS5 is excellent.
  • "SQLite databases corrupt easily" – Wrong. It's one of the most reliable storage formats ever created.
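
Both the JSON and FTS5 claims are easy to verify in a few lines, shown here with Python's stdlib sqlite3 module (assuming your SQLite build includes FTS5, which nearly all modern builds do; the table names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# JSON: query into a JSON column with the built-in json functions.
con.execute("CREATE TABLE users (id INTEGER, profile TEXT)")
con.execute("""INSERT INTO users VALUES (1, '{"name": "Ada", "plan": "pro"}')""")
plan = con.execute(
    "SELECT json_extract(profile, '$.plan') FROM users WHERE id = 1"
).fetchone()[0]

# Full-text search: an FTS5 virtual table with MATCH queries.
con.execute("CREATE VIRTUAL TABLE posts USING fts5(title, body)")
con.execute("INSERT INTO posts VALUES ('Hello SQLite', 'the little engine that could')")
hit = con.execute(
    "SELECT title FROM posts WHERE posts MATCH 'engine'"
).fetchone()[0]

print(plan, hit)  # -> pro Hello SQLite
```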

The Architecture That Changes Everything

Here's how to think about SQLite in production:

Traditional Architecture:
App Server → Network → Database Server → Disk

SQLite Architecture:
App Server → Disk

That's it. Your database queries are now function calls. Your "connection" is a file handle. Your backup system is cp database.db backup.db.

This simplicity isn't a limitation – it's a superpower. Every component you remove is one that can't fail, can't be misconfigured, and doesn't need monitoring.

The Operations Dream You Didn't Know You Wanted

Let's talk about something that'll make your DevOps team weep with joy: SQLite operations.

Backups? It's Just a File

# Your entire backup strategy (safe while nothing is writing)
cp production.db backup-$(date +%Y%m%d).db

# For a live database, use the built-in backup command instead
sqlite3 production.db ".backup backup-$(date +%Y%m%d).db"

# Or get fancy with point-in-time recovery using Litestream
# Streams every change to S3 automatically
litestream replicate production.db s3://mybucket/db

That's it. No pg_dump. No coordinating replicas. No worrying about backup consistency. With Litestream, you get continuous replication to S3 with point-in-time recovery. Set it up in 5 minutes and forget about it.
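
If you'd rather snapshot from application code than from cron, SQLite's online backup API gives you a consistent copy even while writers are active. A sketch in Python's stdlib sqlite3 (paths are illustrative):

```python
import os
import sqlite3
import tempfile

# A plain `cp` is only safe when nothing is writing. The online backup API
# copies the database page by page and yields a consistent snapshot.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE notes (body TEXT)")
src.execute("INSERT INTO notes VALUES ('hello')")
src.commit()

backup_path = os.path.join(tempfile.mkdtemp(), "backup.db")  # illustrative path
dst = sqlite3.connect(backup_path)
src.backup(dst)  # consistent snapshot, no locks held for the whole copy
dst.close()

restored = sqlite3.connect(backup_path)
row = restored.execute("SELECT body FROM notes").fetchone()
print(row[0])  # -> hello
```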

Restoring? Even Easier

# Restore from yesterday
cp backup-20250115.db production.db

# Or restore from S3 with Litestream
litestream restore -o production.db s3://mybucket/db

Compare this to restoring a 50GB Postgres database. I'll wait.

Testing? Use Real Production Data

# Clone prod for testing
cp production.db test.db
# Done. Full production dataset. Zero config.

No sanitizing connection strings. No managing test database servers. No explaining to finance why you need another RDS instance for staging.

Monitoring? What Monitoring?

  • No connection pool metrics
  • No replication lag
  • No long-running query alerts
  • No vacuum schedules
  • No disk space alerts

Just application metrics that actually matter.

Real Production Patterns

Pattern 1: Write-Through Cache

// Instead of complex Redis + Postgres setups
async function getUser(userId: string) {
    return db.get("SELECT * FROM users WHERE id = ?", userId);
    // SQLite *is* your cache – hot reads hit the page cache in microseconds
}

Pattern 2: Per-Tenant Databases

// Each customer gets their own SQLite database
function getCustomerDb(customerId: string) {
    return new Database(`./data/customers/${customerId}.db`);
    // Perfect isolation, easy backups, simple compliance
}

Pattern 3: Hybrid Architecture

// SQLite for reads, Postgres for writes
// Stream changes from Postgres to SQLite replicas
// 99% of queries hit SQLite, 1% hit Postgres
const read = async (query: string) => sqlite.get(query);
const write = async (query: string) => postgres.execute(query);

The Money Shot: When to Use SQLite

Use SQLite when:

  • You're read-heavy (>90% reads)
  • You can fit on one server (up to 100GB RAM and 1TB storage is cheap now)
  • You value simplicity and reliability
  • You want to minimize operational overhead
  • You're building embedded or edge applications
  • You need consistent sub-millisecond query times

Don't use SQLite when:

  • You need high write concurrency (>1000 writes/second sustained)
  • You require multiple writers from different servers
  • You need built-in replication (though LiteFS exists)
  • You need database-level access control

The Plot Twist Nobody Talks About

Here's the dirty secret: most startups using Postgres would be better off with SQLite. They're paying for distributed system complexity they don't need while getting worse performance than a simple SQLite setup would provide.

I've migrated several production systems from Postgres to SQLite. Results:

  • 10x faster queries (no network hop)
  • 90% reduction in operational complexity
  • $500/month saved on managed database costs
  • Backup/restore times dropped from hours to seconds
  • Zero downtime from connection pool issues

Stop Thinking Client-Server

The biggest mental shift is this: stop thinking of SQLite as a database server. It's not. It's a library that happens to implement a SQL database. Once you internalize this, everything clicks.

You wouldn't spin up a separate server for your JSON parser. You wouldn't create a connection pool for your regex engine. So why do it for your database when SQLite can handle your workload as a library?
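
That mental model fits in a few lines. Here's the whole "database setup" sketched with Python's stdlib sqlite3 (table and values invented for illustration):

```python
import sqlite3

# No server, no socket, no credentials: "connecting" just opens a file
# (or, here, an in-memory database).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
db.execute("INSERT INTO kv VALUES ('greeting', 'hello')")

# A query is an ordinary function call returning plain Python objects.
value = db.execute("SELECT v FROM kv WHERE k = ?", ("greeting",)).fetchone()[0]
print(value)  # -> hello
```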

The Future Is Already Here

Major platforms are betting big on SQLite:

  • Cloudflare's Durable Objects use SQLite
  • Fly.io's LiteFS enables distributed SQLite
  • Turso is building a distributed SQLite platform
  • Even Rails is pushing SQLite as a production default

These aren't toy projects. They're production infrastructure serving billions of requests.

Your Move

Before you default to Postgres for your next project, ask yourself:

  1. Will you really have >1000 sustained writes per second?
  2. Do you actually need multiple application servers writing to the same database?
  3. Is the operational complexity worth it for your use case?

If you answered "no" to any of these, give SQLite a serious look. Not as a stepping stone to a "real" database, but as your production database.

The "lite" in SQLite doesn't mean it's less capable. It means it carries less baggage. And in production, less baggage means more speed, more reliability, and more sleep.

Stop overthinking it. Start with SQLite. You can always add complexity later if you actually need it. Chances are, you won't.

A Note From Someone Who Chose Postgres (And Why)

I build UserJot, a feedback management platform. We use Postgres, and for good reasons: we need multi-server scalability, rely heavily on pgvector for semantic search, use LISTEN/NOTIFY for real-time updates, and run complex analytical queries across millions of data points.


But I've also built a blogging platform entirely on SQLite. Single binary deployment, Litestream replication to S3, scaled beautifully on a $5/month VPS. It was a joy to operate precisely because SQLite eliminated all the complexity.

The lesson? I chose Postgres for UserJot because our specific needs demanded it. For the blogging platform, SQLite was the obvious choice. Most projects are more like the blogging platform than they are like UserJot.

Choose boring technology, but choose it deliberately.

Top comments (50)

david duymelinck

You wouldn't spin up a separate server for your JSON parser.

Having separate servers for the application and the database is mainly because of security concerns and different server requirements. It has less to do with which flavour of database you choose.
Also SQLite has no built-in authentication. This is a potential security risk.

SQLite is certainly a good SQL database, and it is great that the nice parts are getting promoted.
Just be aware it is a tool like any other; it comes with flaws too.

Yawar Amin

SQLite has no built-in authentication. This is a potential security risk.

Potentially, in the sense that if you don't give your database file the correct ownership and permissions on disk, it could be read by other users on the machine. Or if you let your machine accept incoming connections that can read files on disk, and can access the SQLite database file, they can potentially read your data.

But these 'security risks' are at a level above SQLite itself–you can have the same security risks whether you're using SQLite or not, and you should be locking down file permissions and network connections on production machines anyway. Calling this a risk with SQLite is calling being on a leaky boat and pointing out that there's a leaky bucket in the boat.

david duymelinck

You are right about the file permissions. That is just the first step in the SQLite security plan. But it is not the security risk I'm hinting at.

Because it is not needed to authenticate, there is no way of tracking who corrupted the data if that happens.
Most database systems also come with permissions. This improves the security even more.

I saw there is encryption for the file and there is an authentication extension. So it is possible to get to almost the same level as other databases. But when people promote it as a one-to-one replacement, many who read those posts are going to miss out on that information.

Remo Dentato

This is how I see it: since SQLite is a library and not a DB like MySQL or Postgres, you don't need the same type of authentication.
With a traditional DB, you have to take care of the actual files that keep the data plus the users of the DB itself (which rarely are also the users of your application).

╭──────────────╮   ╭──────────────╮
│ Application  │   │ Application  │
│ (Users Auth) │   │ (Users Auth) │
╰──────────────╯   │              │
╭──────────────╮   │      +       │
│   Database   │   │   SQLlite    │
│  (DB Auth)   │   │              │
╰──────────────╯   ╰──────────────╯
╭──────────────╮   ╭──────────────╮
│   DB files   │   │ SQLite files │
│(permissions) │   │(permissions) │
╰──────────────╯   ╰──────────────╯


In the picture, the left side shows a traditional DB. You have to worry about user authentication, so nobody can access someone else's data through your application, and DB authentication, so nobody can connect to your DB instance and read the data.
With SQLite your application is the one controlling data access so all the security logic has to be implemented there, not in SQLite.

The issue on keeping the access to the data files themselves private is common to both, of course.

david duymelinck

since SQLite is a library and not a DB like MySQL or Postgres, you don't need the same type of authentication.

Why wouldn't you need the same authentication and security when you are storing the same data in both databases?

Most applications have web content data, and that is what I call low level security data.
And on the other hand there is high level security data, for example with e-commerce; prices, user data, orders, payments.
For the low level security data I think Sqlite is fine. For the other data type I want that extra layer of security "client/server" databases offer out of the box.
But who is going to set up two databases from the start? So in production the more secure database is a better option, even to store the low level security data.

Maybe it is time to consider a multi database setup as the default?

Yawar Amin

In the post:

SQLite doesn't have users, roles, or row-level security. Your application handles all authorization. This is actually fine for 99% of SaaS apps where you're checking permissions in code anyway.

This is called out in its own section. If people are going to read the post and then ignore this specific warning, they have bigger problems than security.

david duymelinck

That is a confusing paragraph, because it mixes application users and permissions and database users and permissions. Their tasks are different.

From the tone it could be interpreted as database users and permissions are not needed at all.

Yawar Amin

They're not needed at all, because the database is a file and it is opened by a single user–the application that uses it. In fact database users and permissions fundamentally don't make sense in SQLite–it's a library running in the same process as the application itself; it's not a separate server that is called by clients. You should read up a bit more on SQLite because you are fundamentally misunderstanding how it works.

david duymelinck

So you are saying that a password protected zipfile with a document in it, sqlite, is as secure as a password protected zipfile with a password protected document, other sql databases. Because the document is a pdf instead of word.

My main concern in this thread is data security. I just can't wrap my head around the fact you are reducing security measures.

From a security standpoint it is a best practice to run different applications, web application and database in this case, with different OS users. This reduces the risks when one user is compromised.
Having a database user that needs to log in to read and manipulate data adds an extra layer of security.

And this is what you want to remove from all your data, because of the database application you prefer? This is the tone of the post, replace the "heavy" database with a fast and easy one.
As I mentioned before I see the benefit for certain data, but not for all.

It feels like good decisions have been made in the past. But the reasoning behind the decisions is forgotten, and that is why mistakes from the past are going to be repeated.

Yawar Amin

If the application server's user is compromised, even with a separate database server running with a different user, it already gives the attacker access to the database, because they can just log in as the application server user and get the credentials to log in to the database. Your extra security layer is not actually adding any extra security.

Looking at it another way, you are effectively saying that an application server can never use local files on disk as any kind of data store for security reasons. This is obviously an absurd argument.

david duymelinck

and get the credentials to log in to the database.

Sure it is not a complex security measure to circumvent, but it is a bump in the road attackers need to overcome to get to the data. And that time span could be enough to change the password and kill current connections.

an application server can never use local files on disk as any kind of data store.

Now you are just twisting my words. It is ok for public data, as I mentioned before.

Yawar Amin

it is a bump in the road attackers need to overcome to get to the data. And that time span

Realistically, that's going to be milliseconds. Attackers are not manually rooting around after logging in, they are just running scripts that automate all the attacks for them. If they already gained login access, you have much bigger problems to worry about than how many more steps it takes to the database.

It is ok for public data,

It is generally understood that an application's data store is part of its persistence layer; it does not typically refer to publicly available data.

david duymelinck

they are just running scripts that automate all the attacks for them.

Just like the counter actions are scripted upon attacker detection. Did you really think it would be a manual response?

Yawar Amin

Your security model relies on the database server somehow being more impregnable against attackers than the application server (ie attackers would try to get the data by compromising the latter and then indirectly access the former, rather than just directly accessing the former). The question that naturally arises then is that if the database server can be made so secure, why can't the application server itself? They are both just computers, there's no magic here.

Imho, this specific model doesn't make sense. It's basically security theatre. Obviously, database servers themselves having some kind of access control mechanism makes sense because they need to be accessed securely from external systems. SQLite files don't.

david duymelinck

if the database server can be made so secure, why can't the application server itself

The application server is the public facing server, so that is the obvious attack vector.
When you rob a bank it is easier to go in the front door than tunneling into the vault.
You don't see a bank that is displaying their gold bars in the public area.

The database server is more secure because it is not public. And yet it still has database user accounts, on top of things like IP whitelisting.

I'm sure you have done, and know, all the things I mentioned. And you call it security theater for the purpose of wanting to use an SQL database flavor that provides fewer security features as a one-to-one replacement.

A scenario where I would feel comfortable to put all data in sqlite is an internal service, in a micro services configuration.
I'm not against using sqlite in production, just be smart about it

Nevo David

damn, the amount of hassle i could’ve skipped with simpler setups like this kinda messes with me tbh - you think most people pick the bigger tools just outta habit or is it low-key fear of missing out on features they’ll never use?

Jan Peterka

For me personally it's the first.

For some random reasons, first database I encountered was MySQL (it was probably WAMP at the time), and since then it is what I would default to (well, MariaDB mostly these days). If I start a new project, I will go for tools I'm comfortable with, as I don't want to spend time on solving issues related to tools, not the app itself. So I need to be pushed a bit to something new:

  • Rails 8 defaults to SQLite, so for my new projects I use SQLite as it's actually less work from the start!
  • Same with authentication – as Rails 8 added an authentication generator, it made it easier than ever to ditch the old overcomplicated authentication gem (library) I would default to in new projects.
  • Sometimes I try to find myself a fun new learning project where I will try something new to get a feel for it (but I don't do that often enough).

Last few years, I'm trying to actively push for "as simple a solution as possible", strongly influenced by the mindset that "not being able to scale from the defaults (in my case mostly Rails+SQLite) is a great problem to have (as it indicates you have a very successful product/business)". So I don't worry about that.

Jonas Scholz

we need sqlheavy

Shayan

thats postgres for you

Nathan Tarbert

this is extremely impressive, i’ve had so many pointless arguments about sqlite and this lays it all out in a way that actually makes sense.
do you think the “lite” in sqlite scares people into overthinking their needs?

Kurt Harriger

There are at least three reasons, not mentioned in your post, to still use a database server:

  1. You need to run more than one server.

Hardware costs do not scale linearly and it can be more expensive to run a single large node instead of many small nodes. But yes, distributed systems are complex and that too has a high cost maybe more so than a large node. However…

  2. Your server will experience hardware failures.

As hard as they try to make servers redundant with RAID disk arrays, redundant power systems, etc., hardware will fail. Disks used to fail every couple of years, SSD performance degrades quickly under heavy use, and databases are hard on storage systems, so drives need frequent replacement.

However, it's not as painful as it used to be. Cloud providers virtualize nearly everything. Database servers are attached to storage systems where disks are swapped frequently without any impact on database performance. Even if the entire machine fails, the cloud provider will move the VM to new hardware automatically within minutes, and all you see is an unexpected server reboot.

For full redundancy, however, you need a second server running in another building with different network switches, power generators, cooling systems, etc. AWS and other vendors provide multiple data centers and availability zones because individual zones do periodically experience hardware and other failures that make resources in that zone unavailable. Many organizations choose not to implement multi-AZ deployments, or do so incorrectly, and when an AWS zone goes down a lot of the internet goes down with it.

If you are running on a cloud provider they will often resolve the issue in a few hours, and waiting it out may or may not be an acceptable trade-off, but if you need to credit customers when you fail to meet SLAs, don't expect any credits from AWS. Even still, 99% availability translates to a bit over 7 hours of downtime per month, so if hardware were the only concern you would probably be fine. However...

  3. Software needs to be updated and patched.

Many software upgrades require restarting services and sometimes the entire OS. Even a small team can easily push several changes per day, and if each of these changes requires a bit of downtime, users are going to notice. Furthermore, some of these deployments are going to fail and require rollbacks.

The more teams you have developing on a shared monolith, the more daily deployments you will have and the riskier they will be. Monoliths bring their own development challenges: developers can no longer understand the entire code base and end up breaking parts they don't understand, and local builds often take several minutes (some monoliths even take hours to build), drastically slowing the development feedback loop.

Database servers also need updates, but far less often than application code. Using symlinks or Docker containers, you can perform application updates in place within a few seconds, but for OS or database upgrades the safest option is still a blue/green deployment, where an entirely new server is deployed with the new software and validated, and then the DNS/load balancers are updated to point at the new servers. If your data is hosted on the old server this becomes massively more complicated. You will need to stop writes to the old server and copy the database to the new server before you can switch traffic. If your database is large this can take a painfully long time, from several minutes to several hours. If you deploy your database on a different node, you can deploy new application nodes without replicating your database for each deployment.


That said I mostly agree with you. If your organization is a single team and you have no SLAs a monolithic application on a single server is a great idea. Simplicity is undervalued and you should try to run a monolith on a single server as long as you can.

However, if you have an SLA or multiple teams doing regular deployments, you will need a database server. And running a database server will allow you to scale horizontally, tolerate hardware failures, perform blue/green deployments, and deploy microservices instead of monoliths.

Dotallio

Honestly, so many teams overbuild their infra instead of starting with something like SQLite – simplicity wins big early on.
Anyone here actually run into real limits with SQLite before needing to switch?

Chris

I agree heavily with this take. I have projects and apps that run on SQLite and are very performant. I also use Fly.io and have not looked into LiteFS yet; I will have to check it out.

Yawar Amin

Great post. I agree that more people should give SQLite a shot. I just want to point out two things that I feel should be clarified.

Expensify does use a single SQLite database for their production system (or at least did the last time they talked about it publicly), but the key here is that they built a distributed replication system on top of it to handle multiple writers: Bedrock bedrockdb.com/

The other thing I want to point out is that SQLite is great when the service is reading/writing an SQLite database on the local disk, but it's too risky when it's reading/writing an SQLite file on a network mount, like NFS. On most cloud-based systems, NFS is what's available. Local disk is not available. Of course that's not true on traditional-style rented VMs–those do have local disks. So if that's what you're using, you are fine.

Allen

I use SQLite as my blog database. It's good, stable, and cheap.

The fun part is I run my blog on AWS serverless, and some people may know serverless doesn't provide permanent storage, so where do I put my SQLite file?

I use NFS to store my SQLite file; the serverless runtime mounts NFS first, then handles requests.

The best thing about this stack is that I don't have to maintain any part of it. I don't have a VM, and I don't use RDS (very expensive!).

So... I ❤️ SQLite.

jesterly

Great write-up 👏

use LISTEN/NOTIFY for real-time updates

In many cases, you can get away with using Durable Objects + WebSockets.

We also built a live support app and it was much simpler to code and manage than Firebase Firestore.
