Both CSharpDB and SQLite are excellent embedded databases for .NET, and "which one is faster?" is the wrong question. The right question is: for each workload, which API on which engine gets you the best number? Sometimes that's SQLite through ADO.NET. Sometimes it's CSharpDB through the direct engine API. Sometimes the gap between the two engines is smaller than the gap between a good API choice and a bad one on either side.
This post walks through four paired benchmarks on .NET 10 — bulk inserts, point lookups, concurrent writers, and EF Core — and shows the compilable code that produced each number. Everything in this post runs end-to-end with dotnet run -c Release.
The Setup
Hardware: Intel Core i9-11900K, .NET 10.0.6, Windows 11. All benchmarks use BenchmarkDotNet's --job short (3 warmup + 3 iterations). Both engines work against the same schema:
CREATE TABLE products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL NOT NULL,
category TEXT
)
SQLite runs through Microsoft.Data.Sqlite 10.0.4. CSharpDB runs through three surfaces: CSharpDB.Data (ADO.NET), CSharpDB.Engine (direct engine API), and CSharpDB.EntityFrameworkCore. The full benchmark harness is open source — search for ClaudeBench on GitHub if you want to reproduce.
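For orientation, every number below comes from a paired baseline/candidate benchmark class so BenchmarkDotNet prints a ratio column directly. The sketch below shows that shape only — the class, method, and parameter names here are illustrative, not ClaudeBench's actual ones:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// Illustrative skeleton: one baseline method (SQLite) and one candidate
// (CSharpDB) per workload, with the row count as a shared parameter.
[MemoryDiagnoser]
public class BulkInsertBenchmark
{
    [Params(10_000)]
    public int Rows;

    [Benchmark(Baseline = true)]
    public void Sqlite_PreparedInsertLoop()
    {
        // prepared INSERT inside an explicit transaction (see Lesson 1)
    }

    [Benchmark]
    public void CSharpDb_InsertBatch()
    {
        // PrepareInsertBatch + ExecuteAsync (see Lesson 1)
    }
}

public static class Program
{
    public static void Main(string[] args) =>
        BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args);
}
```

With `Baseline = true` on the SQLite method, the summary table reports every CSharpDB result as a ratio against it, which is where the multipliers quoted in this post come from.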
Let's walk through the four lessons, cheapest first.
Lesson 1: For Bulk Inserts, Use InsertBatch
The naive CSharpDB path — open a CSharpDbConnection, prepare a parameterised INSERT, loop N times inside a transaction — is the same pattern you'd use on SQLite. But it's the slow path on CSharpDB. For a 10,000-row bulk load, ADO.NET-per-row measures 243 ms. SQLite through the same pattern measures 29 ms. That looks like CSharpDB losing by 8×.
The fix is to stop sending one statement per row. Database.PrepareInsertBatch accepts a typed row buffer that is committed as a single atomic insert:
using CSharpDB.Engine;
using CSharpDB.Primitives;
await using var db = await Database.OpenAsync("bulk.cdb");
await db.ExecuteAsync("""
CREATE TABLE IF NOT EXISTS products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL NOT NULL,
category TEXT
)
""");
var batch = db.PrepareInsertBatch("products", initialCapacity: 10_000);
for (int i = 1; i <= 10_000; i++)
{
batch.AddRow(
DbValue.FromInteger(i),
DbValue.FromText("Product"),
DbValue.FromReal(9.99 + i),
DbValue.FromText("Books"));
}
int inserted = await batch.ExecuteAsync();
Console.WriteLine($"Inserted {inserted} rows");
The same workload through PrepareInsertBatch measures 29 ms median — tied with SQLite on the same machine. The difference is architectural: the batch is dispatched as a single multi-row insert against the engine, so per-row parsing, per-row plan lookup, and per-row parameter binding all collapse into one operation.
For comparison, here is what the equivalent SQLite fast path looks like:
using Microsoft.Data.Sqlite;
await using var conn = new SqliteConnection("Data Source=bulk.db");
await conn.OpenAsync();
await using (var cmd = conn.CreateCommand())
{
cmd.CommandText = """
CREATE TABLE IF NOT EXISTS products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL NOT NULL,
category TEXT
)
""";
await cmd.ExecuteNonQueryAsync();
}
await using var tx = (SqliteTransaction)await conn.BeginTransactionAsync();
await using var insert = conn.CreateCommand();
insert.Transaction = tx;
insert.CommandText =
"INSERT INTO products (id, name, price, category) VALUES (@id, @name, @price, @cat)";
var id = insert.CreateParameter(); id.ParameterName = "@id";
var name = insert.CreateParameter(); name.ParameterName = "@name";
var price = insert.CreateParameter(); price.ParameterName = "@price";
var cat = insert.CreateParameter(); cat.ParameterName = "@cat";
insert.Parameters.Add(id);
insert.Parameters.Add(name);
insert.Parameters.Add(price);
insert.Parameters.Add(cat);
insert.Prepare();
for (int i = 1; i <= 10_000; i++)
{
id.Value = i;
name.Value = "Product";
price.Value = 9.99 + i;
cat.Value = "Books";
await insert.ExecuteNonQueryAsync();
}
await tx.CommitAsync();
That's SQLite at its best: prepared statement, explicit transaction, parameter reuse. 29 ms for 10,000 rows. The CSharpDB InsertBatch code above lands at the same number with less ceremony — no parameter setup, no prepared statement, no manual transaction. The packages you'll need:
<ItemGroup>
<PackageReference Include="CSharpDB" Version="3.1.2" />
<PackageReference Include="Microsoft.Data.Sqlite" Version="10.0.4" />
</ItemGroup>
The practical takeaway: on CSharpDB, InsertBatch is not an optimization — it is the bulk-load API. Reach for it any time you're inserting more than a handful of rows from known data.
Lesson 2: Point Lookups Favour CSharpDB by Two Orders of Magnitude
Single-key lookups are where CSharpDB's storage layer has the clearest advantage. On a 10,000-row warm table with a prepared command, SQLite through ADO.NET averages 63 μs per lookup. CSharpDB through its own ADO.NET provider is 2.4 μs — 26 times faster. Dropping to the direct engine API takes it down further:
using CSharpDB.Engine;
await using var db = await Database.OpenAsync("bulk.cdb");
async ValueTask<string?> GetProductName(long id)
{
await using var result = await db.ExecuteAsync(
    $"SELECT name FROM products WHERE id = {id}"); // interpolation is safe here: id is a long, never user-supplied text
await foreach (var row in result.GetRowsAsync())
return row[0].AsText;
return null;
}
Console.WriteLine(await GetProductName(42));
That path measures 525 ns — roughly 120× faster than SQLite through ADO.NET. The speed comes from a short-circuit in Database.ExecuteAsync that recognises primary-key lookups and goes straight to the B-tree without running the full planner. You don't need a prepared command, and you don't need a special API — the engine does it for you.
If you need parameterised queries against user input (the normal case), the ADO.NET provider still keeps most of that advantage:
using CSharpDB.Data;
await using var conn = new CSharpDbConnection("Data Source=bulk.cdb");
await conn.OpenAsync();
await using var cmd = conn.CreateCommand();
cmd.CommandText = "SELECT name FROM products WHERE id = @id";
cmd.Parameters.AddWithValue("@id", 42L);
await using var reader = await cmd.ExecuteReaderAsync();
string? name = await reader.ReadAsync() ? reader.GetString(0) : null;
Console.WriteLine(name);
The ADO.NET wrapper adds ~1.9 μs of overhead per lookup for parameter binding, reader state, and DbCommand plumbing — still well under SQLite's 63 μs. If you serve a read-heavy API from an embedded database, this is where CSharpDB earns its keep: one to two orders of magnitude more lookups per second than the equivalent SQLite path, without changing anything else about your code.
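If you want to sanity-check the lookup latency outside the harness, a crude Stopwatch loop over the same engine-API call works (illustrative only — BenchmarkDotNet's numbers are far more trustworthy than a hand-rolled loop, and the warmup count here is my choice, not ClaudeBench's):

```csharp
using System.Diagnostics;
using CSharpDB.Engine;

await using var db = await Database.OpenAsync("bulk.cdb");

const int Warmup = 1_000;
const int N = 100_000;

// Warm the page cache and any plan cache before timing.
for (int i = 0; i < Warmup; i++)
    await using (await db.ExecuteAsync("SELECT name FROM products WHERE id = 42")) { }

var sw = Stopwatch.StartNew();
for (int i = 0; i < N; i++)
    await using (await db.ExecuteAsync("SELECT name FROM products WHERE id = 42")) { }
sw.Stop();

// Average per-lookup cost; expect the same order of magnitude as the
// BenchmarkDotNet figure, not the same exact number.
Console.WriteLine($"{sw.Elapsed.TotalNanoseconds / N:F0} ns/lookup");
```

A loop like this can't isolate allocation noise or GC pauses the way a proper harness does, but it's a quick smoke test that the fast path is actually being taken on your machine.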
Lesson 3: Concurrent Writers Are CSharpDB's Territory
This is the benchmark where the architectures diverge most. SQLite is fundamentally single-writer — WAL mode lets readers run alongside a writer, but writers themselves must take the reserved lock one at a time. Eight concurrent tasks doing 1,250 disjoint inserts each still serialize at the write lock.
CSharpDB 3.x exposes ImplicitInsertExecutionMode.ConcurrentWriteTransactions, which routes shared auto-commit inserts through isolated WriteTransaction state. Combined with UseDurableGroupCommit, disjoint-key writers can overlap and coalesce their fsyncs. The numbers on our box, for 10,000 total rows split evenly across N writers:
| Writers | SQLite (WAL, synchronous=NORMAL) | CSharpDB (multi-writer config) | Ratio |
|---|---|---|---|
| 1 | 18.9 ms | 49.7 ms | SQLite 2.6× faster |
| 2 | 172 ms | 58.8 ms | CSharpDB 2.9× faster |
| 4 | 475 ms | 85.8 ms | CSharpDB 5.5× faster |
| 8 | 1,094 ms | 125 ms | CSharpDB 8.8× faster |
SQLite's total time grows almost linearly with writer count — exactly what you'd expect when writers serialise behind one lock. CSharpDB stays nearly flat, and the crossover happens at just two writers. Here is the CSharpDB side of that benchmark end-to-end:
using CSharpDB.Engine;
using CSharpDB.Primitives;
const int TotalRows = 10_000;
const int Writers = 8;
var options = new DatabaseOptions
{
ImplicitInsertExecutionMode = ImplicitInsertExecutionMode.ConcurrentWriteTransactions,
}.ConfigureStorageEngine(builder =>
{
builder.UseWriteOptimizedPreset();
builder.UseDurableGroupCommit(TimeSpan.FromMilliseconds(0.25));
});
await using var db = await Database.OpenAsync("ingest.cdb", options);
await db.ExecuteAsync("""
CREATE TABLE IF NOT EXISTS products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL NOT NULL,
category TEXT
)
""");
int rowsPerWriter = TotalRows / Writers;
Task[] tasks = Enumerable.Range(0, Writers)
.Select(writerIndex => Task.Run(async () =>
{
int startId = writerIndex * rowsPerWriter + 1;
// Each writer owns a disjoint primary-key range — no conflicts.
var batch = db.PrepareInsertBatch("products", rowsPerWriter);
for (int i = 0; i < rowsPerWriter; i++)
{
batch.AddRow(
DbValue.FromInteger(startId + i),
DbValue.FromText("Product"),
DbValue.FromReal(9.99 + i),
DbValue.FromText("Books"));
}
await batch.ExecuteAsync();
}))
.ToArray();
await Task.WhenAll(tasks);
Three things matter in that configuration:
- `ImplicitInsertExecutionMode.ConcurrentWriteTransactions` lets auto-commit inserts run on isolated per-writer transaction state instead of serialising on the main write gate.
- `UseWriteOptimizedPreset()` raises the checkpoint frame threshold and moves checkpoint work to a background slice, so the commit path stays tight.
- `UseDurableGroupCommit(0.25 ms)` gives the flush leader a short window to coalesce overlapping commits into a single OS fsync. On a workload with eight concurrent writers, that turns eight flushes into closer to one or two.
The equivalent SQLite code for comparison is the standard connection-per-worker pattern with WAL mode enabled:
using Microsoft.Data.Sqlite;
const int TotalRows = 10_000;
const int Writers = 8;
const string Cs = "Data Source=ingest.db;Pooling=True";
// WAL + NORMAL sync is the most concurrency-friendly SQLite configuration.
await using (var init = new SqliteConnection(Cs))
{
await init.OpenAsync();
await using var cmd = init.CreateCommand();
cmd.CommandText = @"
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;
PRAGMA busy_timeout = 30000;
CREATE TABLE IF NOT EXISTS products (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price REAL NOT NULL,
category TEXT
);";
await cmd.ExecuteNonQueryAsync();
}
int rowsPerWriter = TotalRows / Writers;
Task[] tasks = Enumerable.Range(0, Writers)
.Select(writerIndex => Task.Run(async () =>
{
int startId = writerIndex * rowsPerWriter + 1;
await using var conn = new SqliteConnection(Cs);
await conn.OpenAsync();
await using var tx = (SqliteTransaction)await conn.BeginTransactionAsync();
await using var cmd = conn.CreateCommand();
cmd.Transaction = tx;
cmd.CommandText =
"INSERT INTO products (id, name, price, category) VALUES (@id, @name, @price, @cat)";
var id = cmd.CreateParameter(); id.ParameterName = "@id";
var name = cmd.CreateParameter(); name.ParameterName = "@name";
var price = cmd.CreateParameter(); price.ParameterName = "@price";
var cat = cmd.CreateParameter(); cat.ParameterName = "@cat";
cmd.Parameters.Add(id);
cmd.Parameters.Add(name);
cmd.Parameters.Add(price);
cmd.Parameters.Add(cat);
cmd.Prepare();
for (int i = 0; i < rowsPerWriter; i++)
{
id.Value = startId + i;
name.Value = "Product";
price.Value = 9.99 + i;
cat.Value = "Books";
await cmd.ExecuteNonQueryAsync();
}
await tx.CommitAsync();
}))
.ToArray();
await Task.WhenAll(tasks);
Two notes on interpreting the numbers:

- The win is for disjoint-key inserts — each writer owns a slice of the primary-key space. A workload where every writer is hammering the same monotonic right edge of the index (typical for a shared identity column with no partitioning) will not see the same fan-in.
- CSharpDB is slower at W=1 because `UseDurableGroupCommit` adds a small batching window and the write-optimized preset trades single-writer latency for multi-writer throughput. Use the default preset if you know your workload is single-threaded.
Lesson 4: Storage Presets Trade Small Amounts of Durability for Write Speed
CSharpDB exposes several presets on the StorageEngineOptionsBuilder that cover the common tuning axes. For the 10,000-row single-writer bulk insert benchmark, the spread across presets looked like this:
| Preset | Median | When to use it |
|---|---|---|
| Default | 29 ms | Single writer, no contention |
| `UseWriteOptimizedPreset` | 33 ms | Sustained write load, background checkpoints |
| `UseLowLatencyDurableWritePreset` | 35 ms | Deferred planner stats, commit-path focus |
| `UseWriteOptimizedPreset` + `UseDurableGroupCommit(0.25 ms)` | 31 ms | Multi-writer disjoint keys (see Lesson 3) |
None of the write presets beat Default on a single-writer workload — a result worth noting before you enable one reflexively. They earn their keep under contention, or on long-lived write-heavy services where the checkpoint pattern matters. Switching one on is a one-line change:
using CSharpDB.Engine;
var options = new DatabaseOptions()
.ConfigureStorageEngine(builder => builder.UseWriteOptimizedPreset());
await using var db = await Database.OpenAsync("app.cdb", options);
Read-side presets like UseDirectLookupOptimizedPreset, UseHybridFileCachePreset, and UseDirectColdFileLookupPreset follow the same pattern, and again, the default is already well-tuned — measure before adopting a preset.
Matching Durability Is Non-Negotiable for Fair Comparisons
If you're running your own CSharpDB-vs-SQLite numbers, the single biggest trap is comparing engines under mismatched durability settings. SQLite's default is journal_mode=DELETE with synchronous=FULL — an fsync per commit, full rollback journal. That's strict, and slow. CSharpDB's defaults are WAL-based with different fsync semantics.
The benchmarks in this post deliberately matched durability where it mattered: in the multi-writer test, SQLite was configured with PRAGMA journal_mode=WAL; synchronous=NORMAL, which is the industry-standard "fast but crash-safe" configuration. If you see any CSharpDB benchmark that beats SQLite by 10× on a single-row insert, check the durability settings before believing it.
A practical checklist when writing comparison code:
- SQLite: set `journal_mode` (WAL for concurrent readers, MEMORY/OFF for pure speed) and `synchronous` (FULL for durable, NORMAL for WAL-safe, OFF for unsafe) explicitly in the connection initialisation.
- CSharpDB: be explicit about the preset. The engine's default uses WAL and durable fsync on commit; the `Buffered` durability mode and the `LowLatencyDurableWrite` preset weaken specific guarantees — document the trade when you use them.
- Both: use the same row count, same schema, same iteration harness, and the same batch size. BenchmarkDotNet's paired baseline/candidate pattern makes this easy.
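As a concrete instance of the first checklist item, the SQLite side can pin and then verify its durability mode at connection setup. This uses standard Microsoft.Data.Sqlite; the PRAGMA values are the same ones used in the Lesson 3 benchmark:

```csharp
using Microsoft.Data.Sqlite;

await using var conn = new SqliteConnection("Data Source=bench.db");
await conn.OpenAsync();

await using var cmd = conn.CreateCommand();
// Pin durability explicitly so the comparison is reproducible:
// WAL for concurrent readers, NORMAL sync for the WAL-safe middle ground.
cmd.CommandText = """
    PRAGMA journal_mode = WAL;
    PRAGMA synchronous = NORMAL;
    """;
await cmd.ExecuteNonQueryAsync();

// Verify rather than assume — journal_mode can silently stay unchanged
// (an in-memory database, for example, always reports 'memory').
cmd.CommandText = "PRAGMA journal_mode;";
Console.WriteLine(await cmd.ExecuteScalarAsync()); // prints the active mode
```

Reading the PRAGMA back matters more than it looks: a benchmark that *asks* for WAL but silently runs in another mode produces numbers that are wrong in exactly the way this section warns about.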
EF Core: Parity on Writes, Bug on Reads
The CSharpDB EF Core provider is newer than the ADO.NET one, and that shows up in the numbers. For bulk insert through DbContext.SaveChanges, CSharpDB and SQLite are essentially tied at 10,000 rows — CSharpDB even allocates 22% less memory. For DbSet.Find by primary key, the picture flips hard: SQLite EF Core averaged 72 μs, CSharpDB EF Core averaged 9,480 μs on the same warm 10,000-row table. That's a 130× regression at the EF layer despite the underlying engine being 120× faster than SQLite on the same lookup.
The practical workaround today is to drop a layer for hot lookups. Either use the ADO.NET provider inside your service:
using CSharpDB.Data;
public sealed class ProductReader(string connectionString)
{
public async Task<string?> GetNameAsync(long id)
{
await using var conn = new CSharpDbConnection(connectionString);
await conn.OpenAsync();
await using var cmd = conn.CreateCommand();
cmd.CommandText = "SELECT name FROM products WHERE id = @id";
cmd.Parameters.AddWithValue("@id", id);
await using var reader = await cmd.ExecuteReaderAsync();
return await reader.ReadAsync() ? reader.GetString(0) : null;
}
}
Or, if you own the context lifecycle, resolve the underlying CSharpDbConnection through the EF Core context and reuse it for the fast path:
using System.Data.Common;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Infrastructure;
using var ctx = new AppDbContext(options);
// Writes through EF Core: tracked entities, migrations, change tracking.
ctx.Products.Add(new Product { Id = 1, Name = "Widget", Price = 9.99 });
await ctx.SaveChangesAsync();
// Reads through the same physical connection, bypassing the EF provider.
DbConnection conn = ctx.Database.GetDbConnection();
if (conn.State != System.Data.ConnectionState.Open)
await conn.OpenAsync();
await using var cmd = conn.CreateCommand();
cmd.CommandText = "SELECT name FROM products WHERE id = @id";
var idParam = cmd.CreateParameter();
idParam.ParameterName = "@id";
idParam.Value = 1L;
cmd.Parameters.Add(idParam);
await using var reader = await cmd.ExecuteReaderAsync();
string? name = await reader.ReadAsync() ? reader.GetString(0) : null;
That keeps the write surface ergonomic while routing hot reads through the fast path the engine is already capable of serving.
A Decision Guide
Putting the four lessons together, here's the short decision tree:
| If your workload is… | Pick… | Why |
|---|---|---|
| Single-threaded bulk load | CSharpDB `InsertBatch` | Parity with SQLite; simpler API; less boilerplate |
| Read-heavy, point-lookup-dominated | CSharpDB engine API or ADO.NET | 26–120× faster than SQLite ADO.NET |
| Concurrent writers, disjoint keys | CSharpDB with `ConcurrentWriteTransactions` + `DurableGroupCommit` | 5–9× faster than SQLite at 4–8 writers |
| Concurrent writers, shared hot key range | Either engine; expect write time to grow with writer count | Fan-in is fundamentally limited in both |
| EF Core writes | Either engine; CSharpDB has slightly lower allocations | Roughly tied at 10k-row SaveChanges |
| EF Core hot reads (today) | SQLite through EF Core, or CSharpDB with an ADO.NET escape hatch | The CSharpDB EF Core Find path is slow in 3.1.x |
| Mixed workload with unknown ratios | CSharpDB with defaults | Read advantage dominates most real app profiles |
Reproducing the Numbers
The full benchmark suite that produced the numbers in this post is available as a BenchmarkDotNet project called ClaudeBench. It references CSharpDB from source so you can point it at any commit, and it runs both engines through paired benchmark classes with matching call patterns. On any modern .NET 10 machine:
dotnet run -c Release -- --filter "*" --job short
You'll see the same shape — SQLite winning single-writer bulk load by a hair, CSharpDB winning point lookups by two orders of magnitude, and CSharpDB pulling ahead on anything with more than one concurrent writer. The absolute numbers will vary with your CPU, disk, and filesystem.
Closing Thoughts
SQLite is a legendary piece of software, and the right default for a large class of .NET applications. CSharpDB isn't trying to replace it — it's a pure-C# embedded engine with a different set of trade-offs, and those trade-offs show up clearly when you put both engines under the same benchmark harness.
The real signal from this exercise isn't the horse-race: it's that the API you pick inside each engine matters at least as much as the engine itself. InsertBatch turns a 243 ms CSharpDB insert into a 29 ms one. Dropping from EF Core to ADO.NET turns a 9.5 ms CSharpDB lookup into a 2.4 μs one. Matching durability turns an apples-to-oranges comparison into an apples-to-apples one. Those levers exist on both engines, and pulling them is usually cheaper than switching engines.
If your workload is a good fit for CSharpDB's sweet spots — concurrent disjoint-key writes, hot point-lookups, or an engine-API that gets out of your way — the numbers will reward you. If it isn't, the same benchmark harness will tell you that too.