
Timescaler here: if you configure timescaledb.compress_segmentby well and the data suits the compression, you can achieve 20x or more compression.
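
A rough sketch of what that configuration looks like (the table and column names here are made up; segmentby should be a column you commonly filter or group by, orderby a column that makes adjacent values similar):

    ALTER TABLE metrics SET (
      timescaledb.compress,
      timescaledb.compress_segmentby = 'device_id',
      timescaledb.compress_orderby   = 'ts DESC'
    );

    -- compress chunks once they are older than a week
    SELECT add_compression_policy('metrics', INTERVAL '7 days');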

(On some internal metrics data, I've seen a 98% reduction in size.)

One reason this works is that the per-tuple overhead only has to be paid once per compressed batch, and a batch can hold as many as 1,000 rows.
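
Back-of-envelope, assuming roughly 27 bytes of per-row header and item-pointer overhead in PostgreSQL:

    1,000 rows  x ~27 bytes/row  ~= 27 KB of pure overhead
    1 batch row x ~27 bytes      ~= 27 bytes of overhead

So the fixed overhead alone shrinks by roughly 1,000x before any actual value compression kicks in.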

The other is the compression algorithm itself, which can be TimescaleDB's own time-series compression or plain PostgreSQL TOAST:

https://www.timescale.com/blog/time-series-compression-algor...

https://www.postgresql.org/docs/current/storage-toast.html
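
If you want to check the ratio on your own data, something like this should work on recent TimescaleDB versions (the hypertable name is again made up):

    SELECT pg_size_pretty(before_compression_total_bytes) AS before,
           pg_size_pretty(after_compression_total_bytes)  AS after
    FROM   hypertable_compression_stats('metrics');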


