10. Design LRU Cache
Difficulty: Medium Topics: Data Structures, Doubly Linked List, HashMap Key Concepts: O(1) Get/Put, Eviction Policy, Generics.
Phase 1: Requirements Gathering
Goals
Design a data structure that follows Least Recently Used (LRU) eviction policy.
Support Get and Put operations in O(1) time complexity.
1. Who are the actors?
Client: Application or System accessing the cache.
Cache: Stores key-value pairs and manages eviction.
2. What are the must-have features? (Core)
Capacity: Fixed size limit.
Get(key): Return value if exists (and update usage history), else -1 (or null).
Put(key, value): Insert or Update value. If full, remove the least recently used item.
3. What are the constraints?
Performance: All operations must be O(1) on average.
Thread Safety: Optional, but good to discuss (Synchronized vs Concurrent).
Phase 2: Use Cases
UC1: Get Key
Actor: Client
Flow:
Client requests Get(Key).
Cache checks HashMap.
If Hit:
Move the corresponding Node to the Head (the "Most Recently Used" position).
Return Value.
If Miss:
Return -1/Null.
UC2: Put Key-Value
Actor: Client
Flow:
Client requests Put(Key, Value).
Cache checks HashMap.
If Exists:
Update Node value.
Move Node to Head.
If New:
If Capacity is full:
Remove Tail (Least Recently Used).
Remove from HashMap.
Create New Node.
Add to Head.
Add to HashMap.
Phase 3: Class Diagram
Step 1: Core Entities
LRUCache: Main container.
Node: Doubly Linked List node (stores Key, Value, Prev, Next).
HashMap: Maps Key -> Node for O(1) access.
UML Diagram
Phase 4: Design Patterns
1. Composition
Description: A design principle where a class is composed of one or more objects of other classes, rather than inheriting from them.
Why used: The LRUCache combines a HashMap (for O(1) lookup) and a DoublyLinkedList (for O(1) reordering). By composing these two structures, we get the benefits of both and satisfy the LRU constraints.
2. Decorator Pattern
Description: Attaches additional responsibilities to an object dynamically.
Why used: (Optional) One could decorate a standard Map interface to add eviction policies (LRU, LFU) transparently, allowing the client to use it just like a regular Map.
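As a sketch of this idea, the hypothetical `EvictingMapDecorator` below wraps any `Map` delegate and layers LRU eviction on top (the class and its `touch` helper are illustrative names, not a standard API):

```java
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

// Hypothetical decorator: wraps any Map and adds LRU eviction on top,
// exposing the same get/put surface the client already uses.
class EvictingMapDecorator<K, V> {
    private final Map<K, V> delegate;
    private final LinkedHashSet<K> usage = new LinkedHashSet<>(); // iteration order = recency (oldest first)
    private final int capacity;

    EvictingMapDecorator(Map<K, V> delegate, int capacity) {
        this.delegate = delegate;
        this.capacity = capacity;
    }

    V get(K key) {
        V value = delegate.get(key);
        if (value != null) touch(key); // a hit counts as "recently used"
        return value;
    }

    void put(K key, V value) {
        if (!delegate.containsKey(key) && delegate.size() == capacity) {
            K eldest = usage.iterator().next(); // least recently used key
            usage.remove(eldest);
            delegate.remove(eldest);
        }
        delegate.put(key, value);
        touch(key);
    }

    private void touch(K key) { // move key to the most-recent end
        usage.remove(key);
        usage.add(key);
    }
}
```

Swapping the `LinkedHashSet` bookkeeping for an LFU counter would change the policy without touching the client-facing interface.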
Phase 5: Code Key Methods
Java Implementation
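A minimal generic sketch of the key methods, composing a `HashMap` with a doubly linked list (sentinel head/tail nodes avoid null checks at the list boundaries; class names follow the entities above):

```java
import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // Doubly linked list node; most-recently-used entries sit near the head.
    private static final class Node<K, V> {
        K key;
        V value;
        Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private final Node<K, V> head = new Node<>(null, null); // sentinel
    private final Node<K, V> tail = new Node<>(null, null); // sentinel

    public LRUCache(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException("capacity must be positive");
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) return null; // miss
        moveToHead(node);              // mark as recently used
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> node = map.get(key);
        if (node != null) {            // update existing entry
            node.value = value;
            moveToHead(node);
            return;
        }
        if (map.size() == capacity) {  // evict least recently used (tail side)
            Node<K, V> lru = tail.prev;
            unlink(lru);
            map.remove(lru.key);
        }
        Node<K, V> fresh = new Node<>(key, value);
        map.put(key, fresh);
        addAfterHead(fresh);
    }

    public int size() { return map.size(); }

    private void moveToHead(Node<K, V> node) {
        unlink(node);
        addAfterHead(node);
    }

    private void unlink(Node<K, V> node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void addAfterHead(Node<K, V> node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}
```

Every operation is a constant number of HashMap lookups and pointer swaps, giving the required O(1) `get`/`put`.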
Phase 6: Discussion
Concurrency (SDE-3 Concept)
Q: How to make this highly concurrent without a massive bottleneck?
A: "The simple approach uses a single global synchronized lock, which serializes all cache access. For high concurrency:
Lock Striping (like ConcurrentHashMap): Create an array of N segment locks (e.g., 16). Hash the key to determine which segment it belongs to and lock only that segment. Each segment maintains its own independent DoublyLinkedList and LRU capacity (Total Capacity / N).
Non-Blocking Algorithms: Use atomic references and compareAndSet, though maintaining a strict LRU order becomes extremely complex without locking."
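The lock-striping idea can be sketched as follows. This is an illustrative simplification (names like `StripedLruCache` and `Segment` are made up here): each segment is a small synchronized LRU map, so contention is confined to one stripe rather than the whole cache.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of lock striping: N independent segments, each with its own
// lock and its own LRU ordering, sized at roughly totalCapacity / N.
class StripedLruCache<K, V> {
    private final Segment<K, V>[] segments;

    @SuppressWarnings("unchecked")
    StripedLruCache(int totalCapacity, int stripes) {
        segments = new Segment[stripes];
        int perSegment = Math.max(1, totalCapacity / stripes);
        for (int i = 0; i < stripes; i++) segments[i] = new Segment<>(perSegment);
    }

    private Segment<K, V> segmentFor(K key) {
        // Hash the key to pick a stripe; floorMod handles negative hash codes.
        return segments[Math.floorMod(key.hashCode(), segments.length)];
    }

    V get(K key) { return segmentFor(key).get(key); }
    void put(K key, V value) { segmentFor(key).put(key, value); }

    // One stripe: a bounded access-ordered map guarded by its own monitor.
    private static final class Segment<K, V> {
        private final LinkedHashMap<K, V> map;

        Segment(int capacity) {
            map = new LinkedHashMap<K, V>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    return size() > capacity; // per-segment LRU eviction
                }
            };
        }

        synchronized V get(K key) { return map.get(key); }
        synchronized void put(K key, V value) { map.put(key, value); }
    }
}
```

Note the trade-off: eviction is only approximately global LRU, since each stripe evicts independently based on its own share of the capacity.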
Built-in Alternatives
Q: Does Java have this?
A: "Yes: LinkedHashMap with accessOrder = true. You can override removeEldestEntry to enforce capacity."
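This is the standard-library route in a few lines, using the real `LinkedHashMap(int, float, boolean)` constructor and `removeEldestEntry` hook (the subclass name is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A bounded LRU map built on LinkedHashMap's access-order mode.
class BoundedLruMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    BoundedLruMap(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: get() moves the entry to the end
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict the eldest once capacity is exceeded
    }
}
```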
Expiration
Q: How to add Time-To-Live (TTL)?
A: "Add a timestamp field to Node. On get(K), check if (now - node.time > TTL). If expired, remove the node and return null; this is passive (lazy) expiration. Add a background cleaner thread if entries must also be reclaimed proactively, without waiting for a read."
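The passive-expiration path can be sketched like this (names such as `TtlCache` and `Entry` are illustrative; the ordering/eviction machinery is omitted to isolate the TTL check):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of lazy TTL expiration: each entry records its write time,
// and get() discards entries older than the configured TTL.
class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long writeTimeMillis;
        Entry(V value, long writeTimeMillis) {
            this.value = value;
            this.writeTimeMillis = writeTimeMillis;
        }
    }

    private final long ttlMillis;
    private final Map<K, Entry<V>> map = new HashMap<>();

    TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis()));
    }

    V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.writeTimeMillis > ttlMillis) {
            map.remove(key); // passive expiration on read
            return null;
        }
        return e.value;
    }
}
```

A production version would combine this check with the LRU list update and run a scheduled sweeper over the map for entries that are never read again.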
SOLID Principles Checklist
S (Single Responsibility): LRUCache manages mapping and eviction.
O (Open/Closed): Hard to extend the eviction policy in this implementation (it is tightly coupled to LRU logic); a Strategy pattern could allow swapping policies (LFU, FIFO).
L (Liskov Substitution): Generic K, V allows substitution of types.
I (Interface Segregation): A Cache interface could expose just get/put.
D (Dependency Inversion): Not heavily used, but the cache could depend on the Map interface.