perf: streaming reads for large event streams (PostgreSQL/SQLite) #364
Reference
jwilger/eventcore#364
Problem

Both the PostgreSQL backend (which uses `fetch_all()`) and the SQLite backend collect the entire event history into a `Vec` before returning from `read_stream()`. For large streams (10K+ events), this means the full history is deserialized and held in memory before the caller sees a single event.
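The collect-everything pattern described above can be sketched as follows (illustrative only; the actual backends read rows through SQL drivers, not an in-memory iterator):

```rust
// Illustrative stand-in for the current behavior: the backend drains every
// row into a Vec before read_stream() returns, so memory grows linearly with
// the stream and no event is visible until the last row has been read.
fn read_stream(rows: impl Iterator<Item = String>) -> Vec<String> {
    rows.collect() // entire history materialized up front
}

fn main() {
    // Simulate a 10K-event stream.
    let rows = (0..10_000).map(|i| format!("event-{i}"));
    let events = read_stream(rows);
    assert_eq!(events.len(), 10_000);
    println!("{} events held in memory at once", events.len());
}
```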
Proposed Solution

Return a streaming iterator or async stream from `read_stream()` that deserializes and yields events one at a time (or in small batches). The `EventStreamReader` type could wrap a stream internally while maintaining the current API for consumers that want to collect.

This would require changing the `EventStore` trait's `read_stream` return type.
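A rough before/after of the trait-level change might look like the following. These signatures are hypothetical, not the actual eventcore API; for async backends the analogous lazy return type would be a `futures::Stream` rather than an `Iterator`:

```rust
// Hypothetical "current" shape: the whole stream is collected before return.
trait EventStoreCurrent {
    type Event;
    type Error;
    fn read_stream(&self, stream_id: &str) -> Result<Vec<Self::Event>, Self::Error>;
}

// Hypothetical "streaming" shape: events are yielded lazily, and each item
// can carry a per-row error (e.g. a deserialization failure mid-stream).
trait EventStoreStreaming {
    type Event;
    type Error;
    fn read_stream<'a>(
        &'a self,
        stream_id: &str,
    ) -> Box<dyn Iterator<Item = Result<Self::Event, Self::Error>> + 'a>;
}

fn main() {
    // Toy in-memory implementation of the streaming shape.
    struct MemStore(Vec<String>);
    impl EventStoreStreaming for MemStore {
        type Event = String;
        type Error = String;
        fn read_stream<'a>(
            &'a self,
            _stream_id: &str,
        ) -> Box<dyn Iterator<Item = Result<String, String>> + 'a> {
            Box::new(self.0.iter().cloned().map(Ok))
        }
    }

    let store = MemStore(vec!["a".into(), "b".into()]);
    // Consumers that want the old behavior can still collect into a Result<Vec<_>, _>.
    let events: Result<Vec<String>, String> = store.read_stream("s-1").collect();
    assert_eq!(events.unwrap(), vec!["a".to_string(), "b".to_string()]);
}
```

Yielding `Result` per item is one design choice; alternatively the stream could abort on the first backend error.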
Expected Impact

Caveats
This is an API-level change to the `EventStore` trait that would affect all backend implementations. The current `EventStreamReader` with `into_iter()` may need to become a `Stream`, or to provide both collected and streaming modes.
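One way to provide both modes is a reader that implements `Iterator` for streaming consumers while keeping a collect-style method. A minimal sketch, with hypothetical names (`fetch_page`, `into_vec`) that are not the actual eventcore API:

```rust
use std::collections::VecDeque;

// Hypothetical dual-mode reader: streams events page by page, so at most one
// page is buffered in memory, yet into_vec() preserves today's collected API.
struct EventStreamReader<F>
where
    F: FnMut(u64, usize) -> Vec<(u64, String)>, // (from_position, page_size) -> page
{
    fetch_page: F,
    page_size: usize,
    buffer: VecDeque<(u64, String)>,
    next_position: u64,
    exhausted: bool,
}

impl<F> EventStreamReader<F>
where
    F: FnMut(u64, usize) -> Vec<(u64, String)>,
{
    fn new(fetch_page: F, page_size: usize) -> Self {
        Self { fetch_page, page_size, buffer: VecDeque::new(), next_position: 0, exhausted: false }
    }

    /// Collected mode: drains the stream, matching the current behavior.
    fn into_vec(self) -> Vec<(u64, String)> {
        self.collect()
    }
}

impl<F> Iterator for EventStreamReader<F>
where
    F: FnMut(u64, usize) -> Vec<(u64, String)>,
{
    type Item = (u64, String);

    fn next(&mut self) -> Option<Self::Item> {
        if self.buffer.is_empty() && !self.exhausted {
            let page = (self.fetch_page)(self.next_position, self.page_size);
            if page.len() < self.page_size {
                self.exhausted = true; // short page: backend has no more rows
            }
            if let Some(&(pos, _)) = page.last() {
                self.next_position = pos + 1;
            }
            self.buffer.extend(page);
        }
        self.buffer.pop_front()
    }
}

fn main() {
    // Fake backend: 10 events fetched in pages of 4, so at most 4 buffered.
    let rows: Vec<(u64, String)> = (0..10).map(|i| (i, format!("evt-{i}"))).collect();
    let reader = EventStreamReader::new(
        |from, size| rows.iter().filter(|(p, _)| *p >= from).take(size).cloned().collect(),
        4,
    );
    let all = reader.into_vec();
    assert_eq!(all.len(), 10);
    assert_eq!(all[0].1, "evt-0");
}
```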
Location

`eventcore-types/src/store.rs`: `EventStore::read_stream` return type and `EventStreamReader`

Benchmark Baseline
Run `cargo bench -p eventcore-bench --bench store_operations -- 'store/read_stream'` to measure before/after.