Fine-grained Control of Concurrency within KV-SSDs

May 31, 2021

data-systems, kv-ssd, concurrency, performance

KV-SSDs handle multiple I/O requests concurrently, but the interactions between concurrent operations inside the device — between indexing, data placement, and garbage collection — create contention that limits throughput and inflates tail latency.

Internal Contention

When a KV-SSD processes concurrent requests, multiple operations compete for shared resources: the index structure, the flash translation layer, and the flash channels themselves. A long-running garbage collection operation can block incoming reads. Concurrent writes to the same index bucket create serialization points. Without careful management, these interactions produce unpredictable performance — high average throughput but with occasional latency spikes that violate application SLAs.
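The serialization point described above can be sketched with a toy coarse-grained index (names and structure are illustrative, not from the paper): a single lock guards the whole hash table, so two writes to different buckets, or a read arriving behind a long-running operation, still queue on the same lock.

```python
import threading

class CoarseIndex:
    """Hypothetical coarse-grained KV index: one lock for everything."""

    def __init__(self, n_buckets=16):
        self.buckets = [dict() for _ in range(n_buckets)]
        self.lock = threading.Lock()  # guards the entire index

    def put(self, key, value):
        with self.lock:  # ALL writes serialize here, even to different buckets
            self.buckets[hash(key) % len(self.buckets)][key] = value

    def get(self, key):
        with self.lock:  # reads also wait behind any in-flight writer
            return self.buckets[hash(key) % len(self.buckets)].get(key)
```

Under load, the single lock is the bottleneck: average throughput can still look acceptable, but any operation that holds the lock for long (analogous to garbage collection inside the device) stalls every other request behind it.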

Fine-grained Control

This work introduces fine-grained concurrency control mechanisms specifically designed for the KV-SSD environment. Rather than using coarse-grained locks that serialize entire operation categories, the approach manages concurrency at the granularity of individual resources — specific index regions, flash channels, and garbage collection units. This allows independent operations to proceed in parallel while only serializing operations that actually conflict.
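A minimal sketch of the per-resource idea, assuming a striped-lock index (the class and names are hypothetical illustrations, not the paper's implementation): each bucket gets its own lock, so operations on different buckets run in parallel and only genuinely conflicting keys serialize.

```python
import threading

class FineIndex:
    """Hypothetical fine-grained KV index: one lock per bucket (lock striping)."""

    def __init__(self, n_buckets=16):
        self.buckets = [dict() for _ in range(n_buckets)]
        self.locks = [threading.Lock() for _ in range(n_buckets)]

    def _bucket(self, key):
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        i = self._bucket(key)
        with self.locks[i]:  # only keys mapping to bucket i wait here
            self.buckets[i][key] = value

    def get(self, key):
        i = self._bucket(key)
        with self.locks[i]:  # independent buckets are never blocked
            return self.buckets[i].get(key)
```

The same principle extends beyond the index in the paper's setting: flash channels and garbage collection units each get their own synchronization scope, so a collection pass on one unit need not stall reads served by another.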

Results

The fine-grained approach improves both throughput and tail latency under mixed workloads. By reducing unnecessary serialization, more of the device’s internal parallelism is exposed to the workload. The improvement is most pronounced under high concurrency with mixed read-write patterns — the exact scenario that is most challenging for coarse-grained approaches.

Significance

As KV-SSDs move from research prototypes toward production deployment, internal concurrency management becomes critical. Applications expect consistent, predictable performance from their storage layer. This work provides the mechanisms needed to deliver that predictability at high utilization levels.

Published at ACM SYSTOR 2021 (14th ACM International Conference on Systems and Storage, 29.9% acceptance rate).