SQLite: The Complete Guide for 2026

February 12, 2026

SQLite is the most widely deployed database engine in the world. It runs on virtually every smartphone, every major web browser, most operating systems, and billions of embedded devices. Unlike PostgreSQL or MySQL, SQLite is not a client-server database. It is a library that reads and writes a single file on disk directly. This simplicity is its greatest strength: zero configuration, zero administration, zero dependencies.

This guide covers everything you need to know about SQLite: from basic operations and data types to advanced features like JSON support, full-text search with FTS5, window functions, WAL mode for concurrency, and production deployment patterns. Whether you are building a mobile app, prototyping a web service, or using SQLite as your production database, this guide will give you the knowledge to use it effectively.

⚙ Try it: Format your SQL queries with our SQL Formatter or keep our SQL Basics Cheat Sheet handy for quick reference.

Table of Contents

  1. What Is SQLite and Why It Is Everywhere
  2. When to Use SQLite vs PostgreSQL/MySQL
  3. Getting Started with the sqlite3 CLI
  4. Data Types and Type Affinity
  5. Creating Tables, Constraints, Foreign Keys
  6. CRUD Operations
  7. Indexes: B-tree, Covering, and Partial
  8. JSON Support
  9. Full-Text Search with FTS5
  10. Window Functions
  11. CTEs and Recursive CTEs
  12. WAL Mode and Concurrent Access
  13. PRAGMA Commands
  14. Backup and Migration
  15. SQLite with Python
  16. SQLite with Node.js
  17. Performance and Query Optimization
  18. SQLite Extensions
  19. Production Patterns
  20. Common Pitfalls and Troubleshooting
  21. Frequently Asked Questions

What Is SQLite and Why It Is Everywhere

SQLite is a C library that implements a self-contained, serverless, zero-configuration SQL database engine. Created by D. Richard Hipp in 2000, it stores an entire database in a single cross-platform file. There is no separate server process, no configuration file, and no network protocol. Your application links against the SQLite library and reads/writes the database file directly.

The numbers are staggering. SQLite is embedded in every Android and iOS device, every major web browser, most operating systems, and countless consumer and industrial devices. There are estimated to be over one trillion active SQLite databases in the world. It is among the most thoroughly tested software libraries in existence, with 100% branch test coverage and millions of test cases.

When to Use SQLite vs PostgreSQL/MySQL

SQLite is not a replacement for PostgreSQL or MySQL. It serves a different purpose. Here is when to choose each:

Choose SQLite when:

- Your application runs on a single server or device (mobile, desktop, IoT, CLI tools)
- Write traffic is moderate (up to a few hundred writes per second)
- You want an embedded, zero-configuration, file-based database
- You need a fast, dependency-free database for development and testing
- Your workload is read-heavy, as most content-driven web applications are

Choose PostgreSQL/MySQL when:

- You need high concurrent write throughput from many clients
- You need multi-server replication, failover, or horizontal scaling
- Multiple application servers must reach one database over the network
- You rely on advanced features like stored procedures or role-based access control

Getting Started with the sqlite3 CLI

SQLite ships with a command-line tool called sqlite3. It is available on most systems or can be installed easily.

# Install on Ubuntu/Debian
sudo apt install sqlite3

# Install on macOS (already included, or via Homebrew)
brew install sqlite

# Create a new database (or open an existing one)
sqlite3 myapp.db

# Common dot-commands inside the sqlite3 shell
.help              -- Show all dot-commands
.tables            -- List all tables
.schema users      -- Show the CREATE statement for a table
.headers on        -- Show column headers in output
.mode column       -- Format output as aligned columns
.mode json         -- Output results as JSON
.quit              -- Exit the shell
# Create a database and run SQL directly from the command line
sqlite3 myapp.db "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT UNIQUE);"
sqlite3 myapp.db "INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com');"
sqlite3 myapp.db "SELECT * FROM users;"

# Import CSV data
sqlite3 myapp.db
.mode csv
.import data.csv users

# Export to CSV
.headers on
.mode csv
.output export.csv
SELECT * FROM users;
.output stdout

Data Types and Type Affinity

SQLite uses a dynamic type system that is fundamentally different from PostgreSQL or MySQL. Instead of rigid column types, SQLite uses type affinity. Any column can store any type of value. The declared type is a hint, not a constraint.

SQLite has five storage classes: NULL, INTEGER, REAL, TEXT, and BLOB. Every declared column type maps to one of five type affinities:

-- Type affinity rules: SQLite maps declared types to affinities
-- INTEGER affinity: INT, INTEGER, TINYINT, SMALLINT, BIGINT
-- TEXT affinity: TEXT, VARCHAR, CLOB, CHARACTER
-- REAL affinity: REAL, DOUBLE, FLOAT
-- NUMERIC affinity: NUMERIC, DECIMAL, BOOLEAN, DATE, DATETIME
-- BLOB affinity: BLOB (or no type declared)

-- You CAN insert a string into an INTEGER column:
CREATE TABLE demo (val INTEGER);
INSERT INTO demo VALUES ('hello');  -- Works! Stores as TEXT.
SELECT typeof(val), val FROM demo;  -- Returns: text, hello

-- Use STRICT tables (SQLite 3.37+) to enforce types:
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    age INTEGER
) STRICT;

INSERT INTO users VALUES (1, 'Alice', 'not a number');
-- Error: cannot store TEXT value in INTEGER column

Creating Tables, Constraints, Foreign Keys

-- Basic table with constraints
CREATE TABLE users (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT UNIQUE NOT NULL,
    status TEXT DEFAULT 'active' CHECK (status IN ('active', 'inactive', 'banned')),
    created_at TEXT DEFAULT (datetime('now'))
);

-- Table with foreign key
CREATE TABLE posts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    user_id INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
    title TEXT NOT NULL,
    body TEXT,
    published_at TEXT
);

-- Composite primary key
CREATE TABLE post_tags (
    post_id INTEGER REFERENCES posts(id) ON DELETE CASCADE,
    tag_id INTEGER REFERENCES tags(id) ON DELETE CASCADE,
    PRIMARY KEY (post_id, tag_id)
);

-- IMPORTANT: Foreign keys are OFF by default in SQLite!
PRAGMA foreign_keys = ON;  -- Must enable per connection

Note on AUTOINCREMENT: In SQLite, INTEGER PRIMARY KEY already auto-increments. Adding AUTOINCREMENT guarantees IDs are never reused (even after deletion), but adds slight overhead. For most applications, INTEGER PRIMARY KEY without AUTOINCREMENT is sufficient.
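
The difference is easy to demonstrate with Python's built-in sqlite3 module. This is a sketch with illustrative table names; it deletes the highest id from each table and shows that only the plain table reuses it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE plain (id INTEGER PRIMARY KEY, v TEXT);
    CREATE TABLE strict_ids (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT);
""")

for table in ("plain", "strict_ids"):
    conn.execute(f"INSERT INTO {table} (v) VALUES ('a')")   # gets id 1
    conn.execute(f"INSERT INTO {table} (v) VALUES ('b')")   # gets id 2
    conn.execute(f"DELETE FROM {table} WHERE id = 2")       # free the highest id
    conn.execute(f"INSERT INTO {table} (v) VALUES ('c')")   # reused, or a fresh id?

print(conn.execute("SELECT max(id) FROM plain").fetchone()[0])       # 2: id reused
print(conn.execute("SELECT max(id) FROM strict_ids").fetchone()[0])  # 3: never reused
```

The AUTOINCREMENT table tracks its high-water mark in the internal sqlite_sequence table, which is the source of the extra overhead mentioned above.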

CRUD Operations

-- INSERT
INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com');

-- Insert multiple rows
INSERT INTO users (name, email) VALUES
    ('Bob', 'bob@example.com'),
    ('Charlie', 'charlie@example.com');

-- INSERT OR REPLACE (upsert by primary key or unique constraint)
INSERT OR REPLACE INTO users (id, name, email)
VALUES (1, 'Alice Updated', 'alice@example.com');

-- INSERT with ON CONFLICT (SQLite 3.24+)
INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com')
ON CONFLICT(email) DO UPDATE SET name = excluded.name;

-- SELECT
SELECT id, name, email FROM users WHERE status = 'active' ORDER BY name LIMIT 10;

-- SELECT with LIKE and GLOB
SELECT * FROM users WHERE name LIKE 'A%';      -- Case-insensitive for ASCII
SELECT * FROM users WHERE name GLOB 'A*';      -- Case-sensitive, Unix-style

-- UPDATE
UPDATE users SET status = 'inactive' WHERE id = 3;
UPDATE users SET name = 'Alice Smith', email = 'asmith@example.com' WHERE id = 1;

-- DELETE
DELETE FROM users WHERE status = 'banned';
DELETE FROM posts WHERE published_at IS NULL AND created_at < datetime('now', '-30 days');

-- RETURNING clause (SQLite 3.35+)
INSERT INTO users (name, email) VALUES ('Diana', 'diana@example.com') RETURNING id, name;
UPDATE users SET status = 'inactive' WHERE id = 5 RETURNING *;
DELETE FROM users WHERE id = 10 RETURNING id, name;
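
The ON CONFLICT upsert above can be verified from Python with an in-memory database (requires SQLite 3.24+, which any recent Python bundles):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT UNIQUE)")

conn.execute("INSERT INTO users (name, email) VALUES ('Alice', 'alice@example.com')")
# The second insert hits the UNIQUE(email) constraint and turns into an UPDATE
conn.execute("""
    INSERT INTO users (name, email) VALUES ('Alice Updated', 'alice@example.com')
    ON CONFLICT(email) DO UPDATE SET name = excluded.name
""")

row = conn.execute("SELECT COUNT(*), name FROM users").fetchone()
print(row)  # (1, 'Alice Updated') -- still one row, with the new name
```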

Indexes: B-tree, Covering, and Partial

SQLite uses B-tree indexes exclusively. Every table is stored as a B-tree (the "table B-tree"), and each index is a separate B-tree that maps indexed column values to row IDs.

-- Basic index
CREATE INDEX idx_users_email ON users (email);

-- Composite index (column order matters!)
CREATE INDEX idx_posts_user_date ON posts (user_id, published_at);
-- Helps: WHERE user_id = 1 AND published_at > '2026-01-01'
-- Helps: WHERE user_id = 1 (uses first column)
-- Does NOT help: WHERE published_at > '2026-01-01' (skips first column)

-- Unique index
CREATE UNIQUE INDEX idx_users_email_unique ON users (email);

-- Partial index: only index rows matching a condition (SQLite 3.8.0+)
CREATE INDEX idx_active_users ON users (email) WHERE status = 'active';
-- Smaller index, faster queries that always filter by status = 'active'

-- Covering index: includes all columns the query needs
CREATE INDEX idx_posts_covering ON posts (user_id, published_at, title);
-- SELECT title FROM posts WHERE user_id = 1 ORDER BY published_at
-- Answered entirely from the index without touching the table

-- Expression index
CREATE INDEX idx_users_lower_name ON users (lower(name));
-- Enables: SELECT * FROM users WHERE lower(name) = 'alice'

-- Drop an index
DROP INDEX IF EXISTS idx_users_email;
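
You can confirm the composite-index behavior above with EXPLAIN QUERY PLAN from Python. The exact plan strings vary between SQLite versions, so treat the outputs as illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, published_at TEXT)")
conn.execute("CREATE INDEX idx_posts_user_date ON posts (user_id, published_at)")

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output is the human-readable detail
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

print(plan("SELECT * FROM posts WHERE user_id = 1"))
# e.g. SEARCH posts USING INDEX idx_posts_user_date (user_id=?)
print(plan("SELECT * FROM posts WHERE published_at > '2026-01-01'"))
# e.g. SCAN posts -- no SEARCH possible when the first index column is skipped
```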

JSON Support

SQLite has shipped JSON functions since version 3.9.0 (originally as the optional JSON1 extension; built in by default since 3.38.0). You can store JSON as TEXT, then query and manipulate it with dedicated functions. The -> and ->> operators (SQLite 3.38.0+) provide shorthand access.

-- Store JSON data
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    data TEXT NOT NULL  -- JSON stored as text
);

INSERT INTO events (data) VALUES ('{"type":"click","page":"/home","user_id":42}');
INSERT INTO events (data) VALUES ('{"type":"purchase","item":"widget","amount":29.99}');

-- Extract values with json_extract()
SELECT json_extract(data, '$.type') AS event_type,
       json_extract(data, '$.page') AS page
FROM events;

-- Shorthand operators (-> returns JSON, ->> returns text)
SELECT data ->> '$.type' AS event_type,
       data ->> '$.amount' AS amount
FROM events;

-- Validate JSON
SELECT json_valid('{"key": "value"}');  -- 1 (valid)
SELECT json_valid('not json');          -- 0 (invalid)

-- Modify JSON
SELECT json_set('{"a":1}', '$.b', 2);           -- {"a":1,"b":2}
SELECT json_remove('{"a":1,"b":2}', '$.b');      -- {"a":1}
SELECT json_insert('{"a":1}', '$.b', 'hello');   -- {"a":1,"b":"hello"}

-- Iterate over JSON arrays with json_each()
SELECT value FROM json_each('[10, 20, 30, 40]');
-- Returns: 10, 20, 30, 40

-- Query JSON arrays in stored data
CREATE TABLE users_v2 (id INTEGER PRIMARY KEY, profile TEXT);
INSERT INTO users_v2 VALUES (1, '{"name":"Alice","tags":["python","sql","devops"]}');

SELECT u.id, tags.value AS tag
FROM users_v2 u, json_each(u.profile ->> '$.tags') AS tags;

-- Index JSON fields with expression indexes
CREATE INDEX idx_events_type ON events (json_extract(data, '$.type'));
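
A quick Python check shows the expression index being picked up, provided the query uses the exact same json_extract() expression as the index definition (the plan text varies by version, so the comment is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, data TEXT NOT NULL)")
conn.execute("""INSERT INTO events (data) VALUES ('{"type":"click","page":"/home"}')""")
conn.execute("CREATE INDEX idx_events_type ON events (json_extract(data, '$.type'))")

detail = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events "
    "WHERE json_extract(data, '$.type') = 'click'"
).fetchone()[-1]
print(detail)  # e.g. SEARCH events USING INDEX idx_events_type (<expr>=?)

row = conn.execute(
    "SELECT json_extract(data, '$.page') FROM events "
    "WHERE json_extract(data, '$.type') = 'click'"
).fetchone()
print(row[0])  # /home
```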

Full-Text Search with FTS5

FTS5 is SQLite's built-in full-text search engine. It creates an inverted index that enables fast text searching across large datasets, similar to Elasticsearch but embedded in your application.

-- Create an FTS5 virtual table
CREATE VIRTUAL TABLE articles USING fts5(title, body, tags);

-- Insert data (same as regular INSERT)
INSERT INTO articles (title, body, tags)
VALUES ('Getting Started with SQLite', 'SQLite is a lightweight database...', 'database sql tutorial');

INSERT INTO articles (title, body, tags)
VALUES ('Python Web Development', 'Flask and Django are popular frameworks...', 'python web flask django');

-- Basic search with MATCH
SELECT * FROM articles WHERE articles MATCH 'sqlite';

-- Boolean operators
SELECT * FROM articles WHERE articles MATCH 'python AND web';
SELECT * FROM articles WHERE articles MATCH 'sqlite OR database';
SELECT * FROM articles WHERE articles MATCH 'python NOT django';

-- Phrase search
SELECT * FROM articles WHERE articles MATCH '"web development"';

-- Prefix search
SELECT * FROM articles WHERE articles MATCH 'data*';  -- matches database, data, etc.

-- Column-specific search
SELECT * FROM articles WHERE articles MATCH 'title:sqlite';
SELECT * FROM articles WHERE articles MATCH 'tags:python';

-- Rank results by relevance with bm25()
SELECT *, rank FROM articles WHERE articles MATCH 'database'
ORDER BY rank;  -- Lower rank = more relevant

-- Highlight matching terms
SELECT highlight(articles, 0, '<b>', '</b>') AS title,
       snippet(articles, 1, '<b>', '</b>', '...', 32) AS excerpt
FROM articles WHERE articles MATCH 'sqlite';

Window Functions

SQLite has supported window functions since version 3.25.0. They perform calculations across related rows without collapsing them, unlike GROUP BY.

-- ROW_NUMBER: assign sequential numbers
SELECT name, score,
    ROW_NUMBER() OVER (ORDER BY score DESC) AS rank
FROM students;

-- RANK and DENSE_RANK
SELECT name, department, salary,
    RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dept_rank,
    DENSE_RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS dense_rank
FROM employees;

-- LAG and LEAD: access previous/next rows
SELECT date, revenue,
    LAG(revenue) OVER (ORDER BY date) AS prev_day,
    revenue - LAG(revenue) OVER (ORDER BY date) AS daily_change
FROM daily_sales;

-- Running total
SELECT date, amount,
    SUM(amount) OVER (ORDER BY date ROWS UNBOUNDED PRECEDING) AS running_total
FROM transactions;

-- Moving average (last 7 entries)
SELECT date, value,
    AVG(value) OVER (ORDER BY date ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS moving_avg_7
FROM metrics;

-- NTILE: divide rows into N equal groups
SELECT name, score,
    NTILE(4) OVER (ORDER BY score DESC) AS quartile
FROM students;
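
As a runnable sanity check, the running-total pattern above produces cumulative sums (Python, in-memory database; window functions need SQLite 3.25+, which any recent Python bundles):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (date TEXT, amount REAL)")
conn.executemany("INSERT INTO transactions VALUES (?, ?)",
                 [("2026-01-01", 10), ("2026-01-02", 20), ("2026-01-03", 5)])

rows = conn.execute("""
    SELECT date, amount,
        SUM(amount) OVER (ORDER BY date ROWS UNBOUNDED PRECEDING) AS running_total
    FROM transactions
""").fetchall()

print([r[2] for r in rows])  # [10.0, 30.0, 35.0]
```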

CTEs and Recursive CTEs

-- Basic CTE
WITH active_posts AS (
    SELECT * FROM posts WHERE published_at IS NOT NULL
)
SELECT u.name, COUNT(ap.id) AS post_count
FROM users u
LEFT JOIN active_posts ap ON u.id = ap.user_id
GROUP BY u.id;

-- Multiple CTEs
WITH
    monthly_stats AS (
        SELECT strftime('%Y-%m', created_at) AS month, COUNT(*) AS signups
        FROM users GROUP BY 1
    ),
    running AS (
        SELECT month, signups,
            SUM(signups) OVER (ORDER BY month) AS total_users
        FROM monthly_stats
    )
SELECT * FROM running ORDER BY month;

-- Recursive CTE: generate a series of dates
WITH RECURSIVE dates(d) AS (
    SELECT '2026-01-01'
    UNION ALL
    SELECT date(d, '+1 day') FROM dates WHERE d < '2026-01-31'
)
SELECT d FROM dates;

-- Recursive CTE: traverse a tree structure
WITH RECURSIVE category_tree AS (
    SELECT id, name, parent_id, 0 AS depth
    FROM categories WHERE parent_id IS NULL

    UNION ALL

    SELECT c.id, c.name, c.parent_id, ct.depth + 1
    FROM categories c
    JOIN category_tree ct ON c.parent_id = ct.id
)
SELECT substr('                ', 1, depth * 2) || name AS tree, depth
FROM category_tree ORDER BY depth, name;
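
Running the date-series CTE above from Python confirms it generates one row per day of January:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
days = conn.execute("""
    WITH RECURSIVE dates(d) AS (
        SELECT '2026-01-01'
        UNION ALL
        SELECT date(d, '+1 day') FROM dates WHERE d < '2026-01-31'
    )
    SELECT d FROM dates
""").fetchall()

print(len(days))                # 31
print(days[0][0], days[-1][0])  # 2026-01-01 2026-01-31
```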

WAL Mode and Concurrent Access

By default, SQLite uses "rollback journal" mode, which locks the entire database during writes. WAL (Write-Ahead Logging) mode is a major improvement: readers do not block writers, and writers do not block readers.

-- Enable WAL mode (persists until changed)
PRAGMA journal_mode = WAL;

-- How WAL works:
-- 1. Writes go to a separate WAL file (database-wal)
-- 2. Readers see a consistent snapshot from the main database + WAL
-- 3. A checkpoint merges WAL changes back into the main database
-- 4. Checkpoints happen automatically (default: every 1000 WAL pages)

-- Manual checkpoint
PRAGMA wal_checkpoint(TRUNCATE);

-- WAL mode concurrency:
-- Multiple readers: YES (unlimited concurrent reads)
-- Reader + Writer: YES (they do not block each other)
-- Multiple writers: NO (only one write transaction at a time)
-- Writer waits via busy_timeout if another write is in progress

-- Set busy timeout (milliseconds to wait for a lock)
PRAGMA busy_timeout = 5000;  -- Wait up to 5 seconds

-- Check current journal mode
PRAGMA journal_mode;

-- WAL2 mode note: SQLite has an experimental WAL2 branch that alternates
-- between two WAL files so that checkpointing does not block writers;
-- it is not yet in stable releases.

PRAGMA Commands

PRAGMAs configure SQLite behavior at runtime. Some persist across connections (journal_mode), most do not (foreign_keys).

-- Essential PRAGMAs for production applications:

-- Enable foreign key enforcement (OFF by default!)
PRAGMA foreign_keys = ON;

-- WAL mode for better concurrency
PRAGMA journal_mode = WAL;

-- Set busy timeout to avoid SQLITE_BUSY errors
PRAGMA busy_timeout = 5000;

-- Increase cache size (negative = KB, positive = pages)
PRAGMA cache_size = -64000;  -- 64 MB cache

-- Enable memory-mapped I/O for faster reads
PRAGMA mmap_size = 268435456;  -- 256 MB

-- Synchronous mode: NORMAL is safe with WAL, much faster than FULL
PRAGMA synchronous = NORMAL;

-- Temp store in memory (faster for complex queries)
PRAGMA temp_store = MEMORY;

-- Get database info
PRAGMA page_size;          -- Default: 4096 bytes
PRAGMA page_count;         -- Total pages in database
PRAGMA database_list;      -- Attached databases
PRAGMA table_info(users);  -- Column info for a table
PRAGMA index_list(users);  -- Indexes on a table
PRAGMA integrity_check;    -- Verify database is not corrupt

-- Optimize (run periodically or on connection close)
PRAGMA optimize;
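
In application code these PRAGMAs are typically applied once per connection, right after connecting. A Python sketch (WAL needs a file-backed database, so a temp file is used):

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = sqlite3.connect(path)

for pragma in ("journal_mode = WAL", "foreign_keys = ON",
               "busy_timeout = 5000", "synchronous = NORMAL"):
    conn.execute(f"PRAGMA {pragma}")

# Read the settings back to verify they took effect
print(conn.execute("PRAGMA journal_mode").fetchone()[0])  # wal
print(conn.execute("PRAGMA foreign_keys").fetchone()[0])  # 1
print(conn.execute("PRAGMA busy_timeout").fetchone()[0])  # 5000
```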

Backup and Migration

# Simple backup: copy the file, but ONLY when no process is writing to it
cp myapp.db myapp.db.backup
cp myapp.db-wal myapp.db-wal.backup  # if WAL mode (the -wal file may not exist)
cp myapp.db-shm myapp.db-shm.backup  # if WAL mode

# Safe backup using the sqlite3 CLI (.backup command uses the Online Backup API)
sqlite3 myapp.db ".backup backup.db"

# Export as SQL dump (portable, can import into any SQL database)
sqlite3 myapp.db .dump > backup.sql

# Import a SQL dump
sqlite3 new.db < backup.sql

# Export a single table
sqlite3 myapp.db ".dump users" > users.sql

# Vacuum: rebuild the database to reclaim space and defragment
sqlite3 myapp.db "VACUUM;"

# Vacuum into a new file (useful for creating a clean backup)
sqlite3 myapp.db "VACUUM INTO 'clean_backup.db';"
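
The same Online Backup API behind the CLI's .backup command is exposed in Python as Connection.backup(), which safely copies a live database page by page:

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.execute("INSERT INTO users (name) VALUES ('Alice')")
src.commit()

dest = sqlite3.connect(":memory:")  # in practice, connect to a backup file path
src.backup(dest)                    # copies the whole database safely

print(dest.execute("SELECT name FROM users").fetchone()[0])  # Alice
```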

SQLite with Python

Python includes the sqlite3 module in its standard library. No installation needed.

import sqlite3

# Connect (creates the file if it does not exist)
conn = sqlite3.connect('myapp.db')
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA foreign_keys=ON")
conn.execute("PRAGMA busy_timeout=5000")

# Use Row factory for dict-like access
conn.row_factory = sqlite3.Row

# Create tables
conn.execute("""
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        email TEXT UNIQUE NOT NULL,
        created_at TEXT DEFAULT (datetime('now'))
    )
""")

# Insert with parameterized queries (NEVER use f-strings for SQL!)
conn.execute("INSERT INTO users (name, email) VALUES (?, ?)", ("Alice", "alice@example.com"))

# Insert many rows
users = [("Bob", "bob@example.com"), ("Charlie", "charlie@example.com")]
conn.executemany("INSERT INTO users (name, email) VALUES (?, ?)", users)
conn.commit()

# Query
cursor = conn.execute("SELECT * FROM users WHERE name LIKE ?", ("%ali%",))
for row in cursor:
    print(row["id"], row["name"], row["email"])

# Context manager for automatic commit/rollback
with conn:
    conn.execute("UPDATE users SET name = ? WHERE id = ?", ("Alice Smith", 1))
    conn.execute("DELETE FROM users WHERE id = ?", (3,))
    # Commits automatically; rolls back on exception

conn.close()

SQLite with Node.js

The better-sqlite3 package is the recommended SQLite driver for Node.js. It is synchronous (no callbacks or promises needed) and significantly faster than the async sqlite3 package.

import Database from 'better-sqlite3';

// Connect
const db = new Database('myapp.db');
db.pragma('journal_mode = WAL');
db.pragma('foreign_keys = ON');
db.pragma('busy_timeout = 5000');

// Create tables
db.exec(`
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        email TEXT UNIQUE NOT NULL,
        created_at TEXT DEFAULT (datetime('now'))
    )
`);

// Prepared statements (reusable and safe from SQL injection)
const insert = db.prepare('INSERT INTO users (name, email) VALUES (?, ?)');
insert.run('Alice', 'alice@example.com');

// Insert many rows in a transaction (much faster)
const insertMany = db.transaction((users) => {
    for (const u of users) insert.run(u.name, u.email);
});
insertMany([
    { name: 'Bob', email: 'bob@example.com' },
    { name: 'Charlie', email: 'charlie@example.com' }
]);

// Query
const getUser = db.prepare('SELECT * FROM users WHERE id = ?');
const user = getUser.get(1);  // Single row
console.log(user.name, user.email);

const allUsers = db.prepare('SELECT * FROM users').all();  // All rows
console.log(allUsers.length);

// Transactions with automatic rollback on error
const transfer = db.transaction((fromId, toId, amount) => {
    db.prepare('UPDATE accounts SET balance = balance - ? WHERE id = ?').run(amount, fromId);
    db.prepare('UPDATE accounts SET balance = balance + ? WHERE id = ?').run(amount, toId);
});

try {
    transfer(1, 2, 100);
} catch (e) {
    console.error('Transfer failed:', e.message);
}

db.close();

Performance and Query Optimization

-- Use EXPLAIN QUERY PLAN to see how SQLite will execute your query
EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = 'alice@example.com';
-- SEARCH users USING INDEX idx_users_email (email=?)  -- Good: using index
-- SCAN users                                          -- Bad: full table scan

-- EXPLAIN (detailed) shows individual bytecode operations
EXPLAIN SELECT * FROM users WHERE id = 1;

-- Batch inserts in transactions (100x faster than individual inserts)
-- Bad: 10,000 individual INSERT statements = 10,000 disk syncs
-- Good: BEGIN; 10,000 INSERTs; COMMIT; = 1 disk sync

-- Use prepared statements to avoid re-parsing SQL
-- (Handled automatically by Python sqlite3 and better-sqlite3)

-- Avoid ORDER BY on unindexed columns for large result sets
-- Create an index if you frequently sort by a column

-- Use COUNT wisely
SELECT COUNT(*) FROM users;                   -- Optimized: scans the smallest index
SELECT COUNT(*) FROM users WHERE age > 30;    -- Benefits from an index on age

-- Prefer EXISTS over COUNT for existence checks
-- Slow: SELECT COUNT(*) FROM orders WHERE user_id = 1 HAVING COUNT(*) > 0
-- Fast: SELECT EXISTS(SELECT 1 FROM orders WHERE user_id = 1)

-- ANALYZE: update query planner statistics
ANALYZE;  -- Analyze all tables
ANALYZE users;  -- Analyze one table
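
The transaction-batching advice can be sketched in Python. On a real on-disk database, one commit instead of ten thousand is where the dramatic speedup comes from; an in-memory demo only shows the pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nums (n INTEGER)")

rows = [(i,) for i in range(10_000)]

# One transaction around the whole batch: a single commit, a single disk sync
with conn:  # the connection context manager wraps this block in a transaction
    conn.executemany("INSERT INTO nums VALUES (?)", rows)

print(conn.execute("SELECT COUNT(*) FROM nums").fetchone()[0])  # 10000
```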

SQLite Extensions

SQLite supports loadable extensions. Key ones include Spatialite (geographic functions like PostGIS), sqlean (regex, UUIDs, crypto, statistics), sqlite-vec (vector similarity search for AI/ML), and built-in R-tree (spatial indexing for bounding boxes).

-- Load an extension at runtime
.load /path/to/sqlean

-- Or in Python:
conn.enable_load_extension(True)
conn.load_extension("/path/to/sqlean")

-- Example: sqlean provides regex()
SELECT * FROM users WHERE name REGEXP '^A[a-z]+$';

-- Example: generate UUIDs
SELECT uuid4();  -- 'a1b2c3d4-e5f6-4789-abcd-ef0123456789'

Production Patterns

SQLite in production is increasingly common. Tools like Litestream and LiteFS make it viable for web applications.

# Litestream: continuous replication of SQLite to S3/GCS/Azure
# Install Litestream, then replicate your database:
litestream replicate myapp.db s3://my-bucket/myapp.db

# Restore from a replica:
litestream restore -o myapp.db s3://my-bucket/myapp.db

# Litestream runs alongside your application, streaming WAL changes
# to cloud storage in near-real-time. Recovery point is typically
# within a few seconds of the latest write.

Connection pooling pattern: Since SQLite allows only one writer at a time, a common pattern is to use separate connections for reads and writes:

import sqlite3

# Single write connection
write_conn = sqlite3.connect('myapp.db')
write_conn.execute("PRAGMA journal_mode=WAL")
write_conn.execute("PRAGMA busy_timeout=5000")
write_conn.execute("PRAGMA synchronous=NORMAL")

# Multiple read connections (one per thread, or use a pool)
def get_read_conn():
    conn = sqlite3.connect('file:myapp.db?mode=ro', uri=True)  # open read-only
    conn.execute("PRAGMA query_only=ON")  # and reject writes at the SQL level too
    return conn

# This pattern maximizes read concurrency while serializing writes

Common Pitfalls and Troubleshooting

- Foreign keys are not enforced unless you run PRAGMA foreign_keys = ON on every connection.
- "database is locked" (SQLITE_BUSY) errors usually mean no busy timeout is configured; set PRAGMA busy_timeout before blaming concurrency.
- Type affinity means an ordinary column accepts values of any type; use STRICT tables (SQLite 3.37+) when you want real type enforcement.
- Thousands of individual auto-committed INSERTs are slow; wrap bulk writes in a single transaction.
- Avoid placing a SQLite database on a network filesystem (NFS, SMB); file locking is unreliable there.
- In WAL mode, do not back up by copying only the .db file; use .backup or VACUUM INTO so recent writes still in the -wal file are included.

Summary

SQLite has grown far beyond a simple embedded database. With JSON support, FTS5, window functions, CTEs, WAL mode, and tools like Litestream, it is a serious option for many production workloads. Start with these essentials: enable WAL mode, set foreign_keys ON, configure busy_timeout, wrap bulk operations in transactions, and use EXPLAIN QUERY PLAN to verify your indexes. These five habits cover 90% of effective SQLite usage.

Frequently Asked Questions

When should I use SQLite instead of PostgreSQL or MySQL?

Use SQLite when your application is single-server, has moderate write traffic (under a few hundred writes per second), or needs an embedded database with zero configuration. SQLite excels for mobile apps, desktop applications, IoT devices, development and testing, and read-heavy web applications. Choose PostgreSQL or MySQL when you need high concurrent write throughput, multi-server replication, or advanced features like stored procedures and role-based access control.

Does SQLite support JSON data?

Yes, SQLite has robust JSON support built in since version 3.9.0. You can use json(), json_extract(), json_set(), json_remove(), json_each(), and json_tree() functions to store, query, and manipulate JSON data. SQLite also supports the -> and ->> shorthand operators for extracting JSON values. You can index JSON fields using expression indexes for fast lookups.

What is WAL mode in SQLite and should I enable it?

WAL (Write-Ahead Logging) mode changes how SQLite handles transactions. Instead of writing changes directly to the database file, changes are appended to a separate WAL file. This allows multiple readers to access the database simultaneously while a writer is active, dramatically improving concurrency. You should enable WAL mode for almost all applications by running PRAGMA journal_mode=WAL. The only exceptions are databases on network filesystems or read-only media.

How do I do full-text search in SQLite?

SQLite provides FTS5 (Full-Text Search version 5) as a built-in extension. Create a virtual table with CREATE VIRTUAL TABLE docs USING fts5(title, body), then query it with SELECT * FROM docs WHERE docs MATCH 'search terms'. FTS5 supports boolean operators (AND, OR, NOT), phrase queries, prefix queries, column filters, and ranking with bm25(). It builds an inverted index for fast text searching across large datasets.

Can SQLite handle concurrent access from multiple processes?

Yes, but with limitations. In WAL mode, SQLite allows multiple concurrent readers and one writer simultaneously. The writer does not block readers. However, only one write transaction can be active at a time; additional writers will wait (with a configurable busy timeout) or receive a SQLITE_BUSY error. For web applications with moderate write traffic, this is usually sufficient. For high-concurrency write workloads, consider PostgreSQL or use tools like Litestream for read replicas.

Related Resources

- SQL Formatter: Format and beautify SQL queries instantly
- SQL Joins Complete Guide: Deep dive into all JOIN types with practical examples
- PostgreSQL Complete Guide: Comprehensive PostgreSQL guide for developers
- JSON Complete Guide: Everything about JSON for developers