
10. Design LRU Cache

Difficulty: Medium
Topics: Data Structures, Doubly Linked List, HashMap
Key Concepts: O(1) Get/Put, Eviction Policy, Generics

Phase 1: Requirements Gathering

Goals

  • Design a data structure that follows Least Recently Used (LRU) eviction policy.

  • Support Get and Put operations in O(1) time complexity.

1. Who are the actors?

  • Client: Application or System accessing the cache.

  • Cache: Stores key-value pairs and manages eviction.

2. What are the must-have features? (Core)

  • Capacity: Fixed size limit.

  • Get(key): Return value if exists (and update usage history), else -1 (or null).

  • Put(key, value): Insert or Update value. If full, remove the least recently used item.

3. What are the constraints?

  • Performance: All operations must be O(1) on average.

  • Thread Safety: Optional, but good to discuss (a single synchronized lock vs. concurrent/striped locking).
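The requirements above can be pinned down as a small generic contract before any implementation work. This is a sketch; the interface name and the choice of null (rather than -1) for a miss are assumptions made here so the contract works for any value type V:

```java
// Hypothetical minimal contract derived from the requirements above.
// A generic V cannot return -1, so a miss is signaled with null instead.
interface Cache<K, V> {
    V get(K key);             // O(1); also marks the entry as recently used
    void put(K key, V value); // O(1); evicts the LRU entry when at capacity
}
```

Fixing the contract first also makes the later SOLID discussion concrete: eviction policy becomes an implementation detail behind get/put.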


Phase 2: Use Cases

UC1: Get Key

Actor: Client
Flow:

  1. Client requests Get(Key).

  2. Cache checks HashMap.

  3. If Hit:

    • Move the corresponding Node to the Head (the "most recently used" position).

    • Return Value.

  4. If Miss:

    • Return -1/Null.

UC2: Put Key-Value

Actor: Client
Flow:

  1. Client requests Put(Key, Value).

  2. Cache checks HashMap.

  3. If Exists:

    • Update Node value.

    • Move Node to Head.

  4. If New:

    • If Capacity is full:

      • Remove Tail (Least Recently Used).

      • Remove from HashMap.

    • Create New Node.

    • Add to Head.

    • Add to HashMap.
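Both flows above reduce to the same O(1) pointer surgery on the doubly linked list: unlink a node, then splice it in behind the head. A standalone sketch of just those two steps (class and method names here are illustrative; sentinel head/tail nodes avoid null checks at the ends):

```java
// Standalone demo of the O(1) re-linking that both Get and Put depend on.
class DllDemo {
    static class Node {
        int key;
        Node prev, next;
        Node(int key) { this.key = key; }
    }

    // Detach n from the list in O(1) by rewiring its neighbors.
    static void unlink(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    // Splice n in immediately after the head sentinel ("most recently used").
    static void addToHead(Node head, Node n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }
}
```

"Move to Head" is simply `unlink(n)` followed by `addToHead(head, n)`, which is why every step in both use cases stays O(1).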


Phase 3: Class Diagram

Step 1: Core Entities

  • LRUCache: Main container.

  • Node: Doubly Linked List node (stores Key, Value, Prev, Next).

  • HashMap: Maps Key -> Node for O(1) access.

UML Diagram

(UML rendering unavailable. In summary: LRUCache composes a HashMap<K, Node> and a doubly linked list of Node; each Node holds key, value, prev, and next.)

Phase 4: Design Patterns

1. Composition

  • Description: A design principle where a class is composed of one or more objects of other classes, rather than inheriting from them.

  • Why used: The LRUCache combines a HashMap (for O(1) lookup) and a DoublyLinkedList (for O(1) updates to ordering). By composing these two structures, we get the benefits of both to satisfy the LRU constraints.

2. Decorator Pattern

  • Description: Attaches additional responsibilities to an object dynamically.

  • Why used: (Optional) One could decorate a standard Map interface to add eviction policies (LRU, LFU) transparently, allowing the client to use it just like a regular Map.


Phase 5: Code Key Methods

Java Implementation
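One way this could be implemented is the sketch below (not the only valid layout): the HashMap provides O(1) key lookup, the doubly linked list provides O(1) recency updates, and sentinel head/tail nodes remove all null checks at the list ends.

```java
import java.util.HashMap;
import java.util.Map;

class LRUCache<K, V> {
    private final class Node {
        K key;
        V value;
        Node prev, next;
        Node(K k, V v) { key = k; value = v; }
    }

    private final int capacity;
    private final Map<K, Node> map = new HashMap<>();
    private final Node head = new Node(null, null); // sentinel: MRU side
    private final Node tail = new Node(null, null); // sentinel: LRU side

    LRUCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public V get(K key) {
        Node node = map.get(key);
        if (node == null) return null; // miss
        moveToHead(node);              // mark as recently used
        return node.value;
    }

    public void put(K key, V value) {
        Node node = map.get(key);
        if (node != null) {            // existing key: update and refresh
            node.value = value;
            moveToHead(node);
            return;
        }
        if (map.size() == capacity) {  // full: evict node before tail sentinel
            Node lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        Node fresh = new Node(key, value);
        addToHead(fresh);
        map.put(key, fresh);
    }

    private void unlink(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
    }

    private void addToHead(Node n) {
        n.next = head.next;
        n.prev = head;
        head.next.prev = n;
        head.next = n;
    }

    private void moveToHead(Node n) {
        unlink(n);
        addToHead(n);
    }
}
```

Note the eviction removes from the map by `lru.key`, which is why the Node must store its key, not just its value.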


Phase 6: Discussion

Concurrency (SDE-3 Concept)

Q: How to make this highly concurrent without a massive bottleneck?

  • A: "The simple approach uses a global synchronized lock, which serializes all cache access. For high concurrency:

    1. Lock Striping (like ConcurrentHashMap): Create an array of N segment locks (e.g., 16). Hash the key to determine which segment it belongs to and only lock that segment. Each segment maintains its own independent DoublyLinkedList and LRU capacity (Total Capacity / N).

    2. Non-Blocking Algorithms: Use atomic references and compareAndSet, though maintaining a strict LRU order becomes extremely complex without locking."
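The lock-striping idea can be sketched as follows. This is a simplified, illustrative version (the class name and per-segment sizing are assumptions): each segment is an access-ordered LinkedHashMap guarded by its own monitor, so threads hitting different segments never contend.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative lock-striped LRU: N independent segments, each with its own
// lock and its own share of the total capacity.
class StripedLruCache<K, V> {
    private final Map<K, V>[] segments;

    @SuppressWarnings("unchecked")
    StripedLruCache(int totalCapacity, int stripes) {
        segments = new Map[stripes];
        int perSegment = Math.max(1, totalCapacity / stripes);
        for (int i = 0; i < stripes; i++) {
            // accessOrder=true makes LinkedHashMap track recency on get().
            segments[i] = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > perSegment; // evict per-segment LRU
                }
            };
        }
    }

    // Route a key to its segment by hash; only that segment is ever locked.
    private Map<K, V> segmentFor(K key) {
        return segments[Math.floorMod(key.hashCode(), segments.length)];
    }

    public V get(K key) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) { return seg.get(key); }
    }

    public void put(K key, V value) {
        Map<K, V> seg = segmentFor(key);
        synchronized (seg) { seg.put(key, value); }
    }
}
```

The trade-off to call out in an interview: striping gives per-segment LRU rather than a single global LRU order, so a hot segment may evict earlier than a globally ordered cache would.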

Built-in Alternatives

Q: Does Java have this?

  • A: "Yes, LinkedHashMap with accessOrder = true. You can override removeEldestEntry to enforce capacity."
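For example, the JDK version fits in a few lines (the class name here is arbitrary; `accessOrder` and `removeEldestEntry` are real LinkedHashMap features):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache via the JDK: accessOrder=true reorders entries on get(), and
// removeEldestEntry is consulted after each put() to evict over capacity.
class JdkLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    JdkLruCache(int capacity) {
        super(16, 0.75f, true); // true = access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

Worth mentioning that LinkedHashMap is not thread-safe, so this still needs external synchronization (or the striping above) under concurrency.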

Expiration

Q: How to add Time-To-Live (TTL)?

  • A: "Add a timestamp field to Node. On get(K), check if (now - node.time > TTL); if expired, remove the node and return null. That on-read check is lazy (passive) expiration, so pair it with a background cleaner thread that actively purges entries that are never read again."
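The lazy-expiration half can be sketched like this. The sketch layers TTL on a plain map for brevity (names are illustrative); a real LRU cache would also unlink the expired node from its recency list, and a background sweeper would handle entries that are never read again:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative lazy (on-read) TTL expiration.
class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long writtenAtNanos; // timestamp recorded at put() time
        Entry(V value, long writtenAtNanos) {
            this.value = value;
            this.writtenAtNanos = writtenAtNanos;
        }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final long ttlNanos;

    TtlCache(long ttlNanos) { this.ttlNanos = ttlNanos; }

    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.nanoTime()));
    }

    public V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.nanoTime() - e.writtenAtNanos > ttlNanos) {
            map.remove(key);       // expired: evict lazily on read
            return null;
        }
        return e.value;
    }
}
```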


SOLID Principles Checklist

  • S (Single Responsibility): LRUCache manages mapping and eviction.

  • O (Open/Closed): Hard to extend eviction policy with this specific implementation (it's tightly coupled to LRU logic). Strategy pattern could allow swapping policies (LFU, FIFO).

  • L (Liskov Substitution): Any implementation of a common Cache interface can be substituted wherever the interface is expected; generic K, V keeps key and value types flexible.

  • I (Interface Segregation): Cache interface could expose get/put.

  • D (Dependency Inversion): Not heavily used, but could depend on Map interface.
