@irjustin reaching out to state that the regressions reported in that issue have been mitigated, with some of them even surpassing 6.2 values.
On the latest example from that issue (EVAL performance), we're now back at 6.2.6 levels, as stated in:
https://github.com/redis/redis/issues/10981#issuecomment-134...
Furthermore, as stated in https://redis.com/blog/improving-redis-performance/, we're proactively improving Redis performance both in 7.0 and the upcoming 7.2, with some cases reaching more than a 5X boost in achievable ops/sec and in client p50 latency. This is work not only from Redis Ltd but from multiple partners and even competitors =)
Some examples of performance improvements from the Redis 7.0 and 7.2 development cycles:
- Use snprintf only once in addReplyDouble. Measured improvement of about 25% on a simple ZADD.
- Move client flags to a more cache-friendly position within the client struct. Regained the 2% of CPU cycles lost since v6.2.
- Optimize d2string() and addReplyDouble() with grisu2. Looking at the ZRANGE WITHSCORES command impact, we saw a 23% improvement in achievable ops/sec on replies with 10 elements, 50% on replies with 100 elements, and 68% on replies with 1,000 elements.
- Optimize stream ID sds creation on XADD key *. Result: about 20% of CPU cycles saved.
- Use either monotonic or wall-clock time to measure command execution time. Regained up to 4% of execution time.
- Avoid deferred array replies on ZRANGE BYRANK commands. Regained 3% to 15% of the performance lost since v5 due to added features.
- Optimize deferred replies to use shared objects instead of sprintf. Measured improvement of 3% to 9% on ZRANGE commands.
- Change compiler optimizations to -O3 -flto. Measured up to 5% performance gain in the benchmark SPEC tests.
- Optimize GEO commands (GEODIST, GEOSEARCH BYBOX and BYRADIUS), leading to up to 5.4X more ops/sec and a drop of up to 6.4X in p50 latency.
You can easily check the Redis repo PRs that affect performance via: https://github.com/redis/redis/pulls?q=is%3Apr+label%3Aactio...
Taking this opportunity to also remind everyone that our goal (as the Redis Performance Team) is to keep Redis performance work open and free of bias in any manner. Anyone can contribute at https://github.com/redis/redis-benchmarks-specification, whether by asking for specific use cases to be benchmarked, by sharing how they're using Redis so we can map that to new benchmarks, or, as always, by submitting PRs to Redis itself.