Build Your Own Key-Value Storage Engine—Week 5
Curious how leading engineers tackle extreme scale challenges with data-intensive applications? Join Monster Scale Summit (free + virtual). It’s hosted by ScyllaDB, the monstrously fast and scalable database. I’ll also give a talk there, so feel free to join!

Agenda:
- Week 0: Introduction
- Week 1: In-Memory Store
- Week 2: LSM Tree Foundations
- Week 3: Durability with Write-Ahead Logging
- Week 4: Deletes, Tombstones, and Compaction
- Week 5: Leveling and Key-Range Partitioning

Last week, you implemented deletion and compaction, making sure the LSM tree wouldn’t grow indefinitely. Still, there’s a weak spot: in the worst-case scenario (e.g., on a key miss), a single read has to scan all SSTables.

To address this, you will implement leveling, a core idea in LSM trees. Instead of a single flat list of SSTables, leveling stores data across multiple levels: L0, L1, L2, etc.:
- L0 gets compacted to L1 and makes space for future memtable flushes.
- L1 gets compacted to L2 and makes space for L0 compaction.
- L2 gets compacted to L3 and makes space for L1 compaction.
- L3 gets compacted to L4 and makes space for L2 compaction.

This process is called level compaction.

Something important to understand: L0 is slightly different from all the other levels. L0 is created during memtable flushes. If a key already exists at L0 and also in the memtable, the next flush can write that key again to a new file. In other words, L0 can have overlapping keys. For all the other levels (L1 and below), that’s not the case. They are created by compaction, which removes duplicates and produces non-overlapping key ranges.

In this week’s simplified design, an L0 to L1 compaction takes all SSTables from L0 and L1, performs a k-way merge, then rewrites L1 fully. As a result, each key appears at most once per level from L1 downward.

What’s the consequence of non-overlapping keys? You can improve lookups using a simple range-to-file mapping that records, for each SSTable, the key range it covers: keys from the range’s lower bound (included) to its upper bound (excluded) are stored in that SSTable.
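To make the idea concrete, here is a minimal sketch of how such a range-to-file mapping could be queried. The key bounds and file names are made up for illustration:

```python
import bisect

# Hypothetical manifest entries for L1: sorted, non-overlapping ranges.
# Each entry: (min_key_inclusive, max_key_exclusive, sstable_file)
L1_RANGES = [
    ("a", "f", "sstable-1.json"),
    ("f", "m", "sstable-2.json"),
    ("m", "z", "sstable-3.json"),
]

def find_l1_file(key):
    """Return the single L1 SSTable that may contain `key`, or None."""
    # Binary search on the lower bounds: the only candidate shard is the
    # last entry whose min_key <= key.
    mins = [entry[0] for entry in L1_RANGES]
    i = bisect.bisect_right(mins, key) - 1
    if i < 0:
        return None  # key sorts before every range
    min_key, max_key, filename = L1_RANGES[i]
    return filename if min_key <= key < max_key else None
```

Because the ranges are sorted and non-overlapping, a binary search on the lower bounds is enough to identify the single file worth reading.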
With this setup, a read checks only one SSTable per level from L1 downward. L0 is the exception due to overlaps, so a read may still need to scan all L0 SSTables.

💬 If you want to share your progress, discuss solutions, or collaborate with other coders, join the community Discord server: Join the Discord

This week, limit the number of levels to two:
- L0, which may contain overlapping keys.
- L1, no overlapping keys.

Create a folder for each level (one for L0, one for L1). Keep one global manifest file at the root.

You will create a layout for both L0 and L1:
- L0 remains a simple list of SSTables.
- L1 allows key-range partitioning. For example, the manifest can indicate that L1 is composed of three SSTables, each covering a non-overlapping range: keys between the first bound (included) and the second bound (excluded) live in the first file, and so on for the other two files.

The main goal of the compaction process is to compact both L0 and L1. At the end, you should merge all the data from L0 and L1 into L1. L0 will be left empty.

When L0 reaches five full SSTable files (2,000 entries each), run an L0 → L1 compaction:
- Open iterators on all L0 and L1 SSTables.
- Apply the k-way merge algorithm:
  - Comparator: primary: the key. Tie-break (equal keys): prefer L0 over L1. At L0, prefer the newest SSTable.
  - Version order: any record from L0 is newer than records from L1. Within L0, newer files win (same as week 4).
  - Keep at most one record per key (newest wins).
  - Tombstones: because L1 is the bottom level, drop a tombstone if no older value for that key remains in the merge result.
- Create new L1 SSTables with at most 2,000 entries. When naming new L1 files, make sure they are unique; for example, continue the numbering from the highest existing L1 file instead of reusing a name.
- Publish atomically:
  - fsync each new file and the L1 directory.
  - Update the manifest atomically.
  - fsync the manifest file.
  - fsync the root directory (the directory containing the manifest file and the L0 and L1 folders).
- Clean up:
  - Delete obsolete L1 files, then fsync.
  - Delete all files in the L0 folder, then fsync.

The flush logic is unchanged from previous weeks. The only difference is that flush writes to L0 and updates the manifest file in the L0 section.

The read path:
- Check the memtable.
- If not found, scan all L0 files newest to oldest using the L0 section of the manifest.
- If not found at L0: use the L1 section of the manifest to choose the one shard that contains the key’s range, then read only that L1 file.
- Return the value if found; otherwise, return a not-found result.

There are no changes to the client. Run it against the same file ( put-delete.txt ) to validate that your changes are correct.

Introducing leveling has a fundamental impact on deletions. With a single level, compaction sees all versions of every key at once, so a tombstone can be dropped as soon as it has “killed” every older record for that key. Yet, the rule we mentioned last week holds true: a tombstone can be evicted only after all the data it shadows no longer exists on disk. With multiple levels, compaction must propagate tombstones downward. It’s only at the bottommost level that tombstones can be dropped, because only there can you prove they no longer shadow any other records.

As an optional task, make the number of levels configurable: L0, L1, …, Ln:
- Define a size ratio so each level has a target size larger than the previous one.
- Keep one directory per level: one for L0, one for L1, …, one for Ln.
- Keep a single global manifest.
- When a level reaches its max number of SSTables (derived from the size ratio), compact that level into the next.
- Only drop tombstones at the bottommost level Ln. At any intermediate level, propagate the tombstone downward during compaction.

Implement a scan operation: return all keys between a start key (included) and an end key (excluded). Use put-delete-scan.txt to validate that your changes are correct. It introduces the scan keyword: a scan line means that between the start key (included) and the end key (excluded), the matching keys are returned (the output will always be sorted).

NOTE: If this new route conflicts with the existing single-key route, rename the single-key route.

That’s it for this week! Your LSM tree is taking shape. You implemented leveling, a key LSM design idea, and refined compaction so reads are tighter and storage stays under control. In two weeks, you will revisit the week 2 choice of JSON for SSTables.
You will switch to block-based SSTables to reduce parsing and I/O overhead and add indexing within each SSTable.

We mentioned that, because of key overlaps, a read may still need to scan all L0 SSTables (e.g., on a key miss). This is the main reason why L0 is typically kept small. In general, each level is larger than the one above it by a fixed size ratio (e.g., 10×). Some databases even use less static mechanisms. For instance, RocksDB relies on Dynamic Leveled Compaction, where the size of each level is automatically adjusted based on the size of the oldest (last) level, eliminating the need to define each level’s size statically.

Regarding compaction, you should know that in real-world databases, it isn’t done in batch mode across all data. Let’s understand why. Suppose you have four levels and a layout like this for one key:
- The key exists at L3.
- The key doesn’t exist at L2.
- The key is updated at L1.
- A tombstone is placed at L0.

You can’t compact L0 with L1/L2/L3 in one shot; that would mean checking every SSTable against every level. What happens in reality is that compaction is a promotion process. In our example, the tombstone at L0 is promoted to L1. Implementations ensure that it either (a) is compacted together with the L1 SSTables it shadows, or (b) waits until that L1 data is promoted to L2. The same rule repeats level by level, until the tombstone reaches L3 and finally removes the shadowed value.

Meanwhile, it’s essential to understand that compaction is crucial in LSM trees. Let’s take some perspective to understand why. An LSM tree buffers writes in a memtable and flushes to L0. Compaction merges SSTables across levels to control read amplification. If compaction falls behind, L0 files accumulate, flushes slow down (or stall at file-count thresholds), write latency climbs, and in the worst case, you can observe write pauses. Not because the memtable is “locked,” but because the engine can’t safely create more L0 files until compaction catches up.
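The stall behavior just described can be illustrated with a toy sketch. The threshold and names below are made up; a real engine tracks far more state:

```python
L0_STALL_THRESHOLD = 5  # hypothetical limit on the L0 file count

class Engine:
    def __init__(self):
        self.l0_files = []        # names of SSTables currently in L0
        self.stalled_flushes = 0  # how often a flush had to wait

    def try_flush(self, sstable_name):
        """Flush the memtable to a new L0 file only if compaction kept up."""
        if len(self.l0_files) >= L0_STALL_THRESHOLD:
            # The engine can't safely create more L0 files: the flush
            # stalls (and writes back up) until compaction drains L0.
            self.stalled_flushes += 1
            return False
        self.l0_files.append(sstable_name)
        return True

    def compact_l0(self):
        """Simulate an L0 -> L1 compaction draining every L0 file."""
        self.l0_files.clear()
```

Once the file count hits the threshold, flushes are rejected until `compact_l0` runs, which is exactly the backpressure that surfaces as write pauses.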
This is one of the reasons why the RUM conjecture we introduced last week is important. If you compact too eagerly, you burn a lot of disk I/O and lose the LSM’s write advantage. If you compact too lazily, you incur a penalty on your read path. If you compact everything all the time, you incur a space-amplification penalty during compaction roughly equal to the working set size.

Because compaction is so important, most key-value stores support parallel compactions across levels (except L0 → L1, which isn’t parallelized due to overlapping key ranges in L0).

You should also be aware that ongoing research keeps improving compaction. For example, the SILK: Preventing Latency Spikes in LSM Key-Value Stores paper analyzes why LSM systems can exhibit high tail latency. The main reason is that limited I/O bandwidth causes interference between client writes, flushes, and compactions. The key takeaway is that not all internal operations are equal. The paper explores solutions such as:
- Bandwidth awareness: monitor client I/O and dynamically allocate the leftover bandwidth to internal work instead of relying on static configuration.
- Prioritization: give priority to operations near the top of the tree (flushes and L0 → L1 compaction). Slowdowns there create backpressure that impacts tail latency more than work at deeper levels.

Last but not least, what you implemented this week is called level compaction. Other strategies exist, like tiered compaction, which merges SSTables based on their size and count rather than fixed levels. You can explore this great resource from Mark Callaghan, which dives deeper into the design trade-offs and performance characteristics of different compaction strategies in LSM trees.

Missing direction in your tech career? At The Coder Cafe, we serve timeless concepts with your coffee to help you master the fundamentals. Written by a Google SWE and trusted by thousands of readers, we support your growth as an engineer, one coffee at a time.
❤️ If you enjoyed this post, please hit the like button.