perf: batch INSERT for PostgreSQL append_events #360
Reference
jwilger/eventcore#360
Problem
PostgresEventStore::append_events() runs a separate query().execute() for each event inside the transaction. For 100 events, that's 100 individual SQL round-trips rather than a single multi-row INSERT.
Benchmark data:
Marten (.NET) achieves 3.6 ms for a 100-event batch on the same PostgreSQL setup.
Proposed Solution
Replace the per-event INSERT loop with a single multi-row INSERT statement (or use PostgreSQL's COPY protocol for larger batches). All events in a single append_events() call share the same transaction, so they can be batched into one statement.
Expected Impact
100-event append: 16.7 ms → ~5-6 ms (2-3x improvement), bringing PostgreSQL throughput closer to Marten-competitive numbers.
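The multi-row INSERT above amounts to generating PostgreSQL's numbered placeholders for every event in the batch and binding all values to one statement. A minimal sketch of the SQL generation follows; the events table name and the (stream_id, version, event_type, payload) column list are illustrative assumptions, not eventcore's actual schema:

```rust
/// Build one multi-row INSERT for `n` events. Columns here are hypothetical;
/// the real list lives in eventcore-postgres/src/lib.rs.
fn build_batch_insert(n: usize) -> String {
    let cols = ["stream_id", "version", "event_type", "payload"];
    let mut sql = format!("INSERT INTO events ({}) VALUES ", cols.join(", "));
    for row in 0..n {
        if row > 0 {
            sql.push_str(", ");
        }
        sql.push('(');
        for col in 0..cols.len() {
            if col > 0 {
                sql.push_str(", ");
            }
            // PostgreSQL placeholders are 1-based and numbered across the
            // whole statement, not per row.
            sql.push_str(&format!("${}", row * cols.len() + col + 1));
        }
        sql.push(')');
    }
    sql
}

fn main() {
    let sql = build_batch_insert(2);
    assert_eq!(
        sql,
        "INSERT INTO events (stream_id, version, event_type, payload) VALUES ($1, $2, $3, $4), ($5, $6, $7, $8)"
    );
    println!("{sql}");
}
```

If the store's query().execute() calls are sqlx (which the naming suggests, though this issue doesn't say), sqlx's QueryBuilder::push_values expresses the same shape more idiomatically and keeps bind order in sync with the placeholder numbering automatically.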
Location
eventcore-postgres/src/lib.rs, in the append_events() method: the per-event INSERT loop.
Benchmark Baseline
Run
cargo bench -p eventcore-bench --bench store_operations -- 'store/append/postgres' to measure before/after.
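One caveat for the multi-row INSERT path: the PostgreSQL extended-query protocol encodes the bind-parameter count as a 16-bit integer, so a single statement can carry at most 65,535 parameters. Very large batches therefore need chunking (or the COPY protocol mentioned above). A sketch, assuming a hypothetical four bound columns per event:

```rust
/// The PostgreSQL wire protocol caps one prepared statement at 65_535
/// bind parameters (the count is sent as a 16-bit integer).
const MAX_PARAMS: usize = 65_535;

/// How many event rows fit in one multi-row INSERT.
fn max_rows_per_statement(params_per_row: usize) -> usize {
    MAX_PARAMS / params_per_row
}

fn main() {
    // Hypothetical batch of 40_000 events, 4 bound columns each.
    let events: Vec<u32> = (0..40_000).collect();
    let rows_per_stmt = max_rows_per_statement(4);
    assert_eq!(rows_per_stmt, 16_383);

    // chunks() yields full-size slices plus one remainder slice.
    let statements = events.chunks(rows_per_stmt).count();
    assert_eq!(statements, 3); // 16_383 + 16_383 + 7_234 rows
    println!("{statements} statements for {} events", events.len());
}
```

For the 100-event batches this issue benchmarks, a single statement is comfortably within the limit; the chunking only matters if append_events() is ever handed thousands of events at once.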