S3-backed key/value database for infrequent read access
infreqdb might be useful if:
- Your database is large.
- You mostly do bulk updates.
- Most of the data is cold, i.e. only a small subset is typically queried.
- You can partition your data so that hot and cold objects live in different partitions.
- Your data fits a key/value model.
The source of truth for all data is an S3 bucket. The data is split into multiple partitions, each of which is a Bolt database file, stored gzipped in S3. infreqdb caches partitions on local disk. Changes to a partition are made by rewriting and uploading the entire partition.
I have a PostgreSQL database (mostly time series) that's consuming about 500 GB (and growing) of storage. The data is the output of batch processing scripts, which process an hour's worth of data at a time and merge it into the database. Queries are mostly for fresh data.
500 GB of data might take 1500 GB of disk storage: two replicas for HA, plus 250 GB extra per replica to accommodate growth, i.e. (500 GB + 250 GB) × 2 = 1500 GB. The same data compressed might be around 300 GB on S3 (I haven't done an export yet), and S3 is already replicated.
For comparison:
- 1500 GB of EBS (gp2) costs $150/month.
- 300 GB on S3 costs $6.90/month, plus request charges; there are no bandwidth charges when running in EC2.
infreqdb is a library, not a database server.
Example: toyexample.
Planned improvements:
- Make storage pluggable.
- Build clustering that can gossip evictions and take ownership of a portion of the data.
- Allow cached partitions to persist across restarts.
I have not yet used infreqdb for anything large, just the toyexample.