LeetCode 460: LFU Cache Solution in Python – A Step-by-Step Guide

Imagine you’re a librarian managing a small shelf with limited space—like 2 slots—for books, where readers request titles (get) and you add new ones (put). When the shelf’s full, you remove the least frequently borrowed book, and if there’s a tie, the least recently used among them goes. That’s the clever challenge of LeetCode 460: LFU Cache, a hard-level problem that’s a deep dive into data structure design. Using Python, we’ll solve it two ways: the Best Solution, a doubly linked list with frequency tracking that’s fast and O(1), and an Alternative Solution, a hash map with list simulation that’s simpler but slower. With examples, code breakdowns, and a friendly tone, this guide will help you manage that cache—whether you’re new to hard problems or shelving your skills. Let’s check out some books and dive in!

What Is LeetCode 460: LFU Cache?

In LeetCode 460: LFU Cache, you need to design a Least Frequently Used (LFU) cache with two operations: get(key) returns the value if it exists (incrementing its frequency) or -1 if not, and put(key, value) adds or updates the key-value pair, evicting the least frequently used item if the cache is full (using least recent use as a tiebreaker). For example, with capacity 2, putting [1,1], [2,2], getting 1, then putting [3,3] evicts 2 (freq 1) over 1 (freq 2). It’s like running a tiny library with a strict borrowing policy.

Problem Statement

  • Input:
    • LFUCache(capacity)—initialize with capacity.
    • get(key)—get value, return -1 if not found.
    • put(key, value)—add or update key-value pair.
  • Output:
    • get: int—value or -1.
    • put: None.
  • Rules:
    • O(1) time for get and put.
    • Evict least frequently used; if tied, least recently used.

Constraints

  • 0 <= capacity <= 10^4.
  • 0 <= key <= 10^5.
  • 0 <= value <= 10^9.
  • At most 2 * 10^5 calls to get and put.

Examples to Get Us Started

  • Input: ["LFUCache","put","put","get","put","get","get","put","get","get","get"], [[2],[1,1],[2,2],[1],[3,3],[2],[3],[4,4],[1],[3],[4]]
    • Output: [null,null,null,1,null,-1,3,null,-1,3,4]
      • Capacity 2: put [1,1], [2,2]; get 1 → 1 (freq 2); put [3,3] evicts 2 (freq 1); get 2 → -1; get 3 → 3 (freq 2); put [4,4] evicts 1 (1 and 3 tie at freq 2, but 1 was used less recently); get 1 → -1, get 3 → 3, get 4 → 4.
  • Input: ["LFUCache","put","get"], [[0],[1,1],[1]]
    • Output: [null,null,-1] (Capacity 0, no storage).
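To make the traces above concrete, here is a tiny brute-force reference model (a hypothetical helper of my own, not the O(1) design this problem demands) that replays the first example:

```python
# Brute-force reference model of LFU semantics (O(n) eviction, for checking only):
# evict the key with the lowest use count; break ties by the oldest last use.
class RefLFU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}   # key -> value
        self.freq = {}   # key -> use count
        self.last = {}   # key -> last-use tick
        self.tick = 0    # global logical clock

    def get(self, key):
        if key not in self.data:
            return -1
        self.tick += 1
        self.freq[key] += 1
        self.last[key] = self.tick
        return self.data[key]

    def put(self, key, value):
        if self.capacity == 0:
            return
        self.tick += 1
        if key in self.data:
            self.data[key] = value
            self.freq[key] += 1
            self.last[key] = self.tick
            return
        if len(self.data) >= self.capacity:
            # Lowest frequency loses; among ties, the smallest last-use tick (LRU)
            victim = min(self.data, key=lambda k: (self.freq[k], self.last[k]))
            for m in (self.data, self.freq, self.last):
                del m[victim]
        self.data[key] = value
        self.freq[key] = 1
        self.last[key] = self.tick

cache = RefLFU(2)
cache.put(1, 1); cache.put(2, 2)
print(cache.get(1))   # 1
cache.put(3, 3)       # evicts key 2 (lowest frequency)
print(cache.get(2))   # -1
print(cache.get(3))   # 3
cache.put(4, 4)       # 1 and 3 tie at freq 2; 1 is least recently used, so it goes
print(cache.get(1))   # -1
```

The `min` over `(frequency, last_use)` tuples is exactly the eviction rule, just paid for with an O(n) scan; the rest of this guide is about getting the same behavior in O(1).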

Understanding the Problem: Managing the Shelf

To solve LeetCode 460: LFU Cache in Python, we need a data structure supporting O(1) get and put, tracking frequency and recency, and evicting the least frequently used (LFU) item with LRU tiebreaking. A simple hash map is O(n) for eviction; we need something smarter. We’ll explore:

  • Best Solution (Doubly Linked List with Frequency Tracking): O(1) time, O(capacity) space—optimal and complex.
  • Alternative Solution (Hash Map with List): O(n) time, O(n) space—simpler but slower.

Let’s dive into the best solution—it’s the librarian’s perfect catalog we need.

Best Solution: Doubly Linked List with Frequency Tracking

Why This Is the Best Solution

The doubly linked list with frequency tracking is the top pick because it achieves O(1) time for both get and put, using nested doubly linked lists: one for frequency levels and one for keys within each frequency, with hash maps for instant access. It’s like organizing books by borrowing frequency on shelves, with each shelf linked for quick swaps—elegant and lightning-fast!

How It Works

Here’s the strategy:

  • Structures:
    • Node: key, value, frequency, pointers (prev, next).
    • DLinkedList: doubly linked list for nodes at same frequency.
    • freq_map: {frequency: DLinkedList}.
    • key_map: {key: Node}.
  • Get:
    • Lookup key in key_map, return -1 if not found.
    • Increment node’s frequency, move to next freq list.
    • Update min_freq if needed.
  • Put:
    • If key exists, update value, increment freq.
    • If full, evict min_freq’s LRU node.
    • Add new node, update freq_map and min_freq.
  • Why It Works:
    • O(1) access via hash maps.
    • Doubly linked lists enable O(1) insertion/removal.

Step-by-Step Example

Example: capacity = 2, operations: put(1,1), put(2,2), get(1), put(3,3). In the trace below, DLL(k, …) lists the keys stored at that frequency, least recently used first.

  • Initialize: capacity=2, freq_map={}, key_map={}, min_freq=0.
  • put(1,1):
    • key_map = {1:Node(1,1,1)}, freq_map = {1:DLL(1)}, min_freq = 1.
  • put(2,2):
    • key_map = {1:Node(1,1,1), 2:Node(2,2,1)}, freq_map = {1:DLL(1,2)}.
  • get(1):
    • Node(1,1,1) → Node(1,1,2), freq_map = {1:DLL(2), 2:DLL(1)}, min_freq = 1, return 1.
  • put(3,3):
    • Full: evict min_freq=1, LRU=2.
    • key_map = {1:Node(1,1,2), 3:Node(3,3,1)}, freq_map = {1:DLL(3), 2:DLL(1)}, min_freq = 1.

Code with Detailed Line-by-Line Explanation

Here’s the Python code, broken down clearly:

class Node:
    def __init__(self, key=0, value=0, freq=0):
        self.key = key
        self.value = value
        self.freq = freq
        self.prev = None
        self.next = None

class DLinkedList:
    def __init__(self):
        self.head = Node()  # Dummy head
        self.tail = Node()  # Dummy tail
        self.head.next = self.tail
        self.tail.prev = self.head
        self.size = 0

    def add_node(self, node):
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node
        self.size += 1

    def remove_node(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev
        self.size -= 1

    def pop_tail(self):
        if self.size == 0:
            return None
        node = self.tail.prev
        self.remove_node(node)
        return node

class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.size = 0
        self.min_freq = 0
        self.key_map = {}  # key: Node
        self.freq_map = {}  # freq: DLinkedList

    def get(self, key: int) -> int:
        if key not in self.key_map:
            return -1
        node = self.key_map[key]
        self._update_freq(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        if self.capacity == 0:
            return

        if key in self.key_map:
            node = self.key_map[key]
            node.value = value
            self._update_freq(node)
        else:
            if self.size >= self.capacity:
                # Evict LFU, LRU
                lru_list = self.freq_map[self.min_freq]
                lru_node = lru_list.pop_tail()
                del self.key_map[lru_node.key]
                self.size -= 1
                if lru_list.size == 0:
                    del self.freq_map[self.min_freq]

            # Add new node
            new_node = Node(key, value, 1)
            self.key_map[key] = new_node
            if 1 not in self.freq_map:
                self.freq_map[1] = DLinkedList()
            self.freq_map[1].add_node(new_node)
            self.size += 1
            self.min_freq = 1

    def _update_freq(self, node):
        # Remove from old frequency list
        old_freq = node.freq
        self.freq_map[old_freq].remove_node(node)
        if self.freq_map[old_freq].size == 0:
            del self.freq_map[old_freq]
            if self.min_freq == old_freq:
                self.min_freq += 1

        # Add to new frequency list
        node.freq += 1
        if node.freq not in self.freq_map:
            self.freq_map[node.freq] = DLinkedList()
        self.freq_map[node.freq].add_node(node)

  • Node: Key, value, freq, prev/next pointers.
  • DLinkedList: O(1) add/remove with dummy nodes.
  • LFUCache:
    • Init: Set capacity, maps, min_freq.
    • Get: Lookup, update freq, return value.
    • Put: Update if exists, else evict LFU/LRU, add new.
    • _update_freq: Move node to new freq list, adjust min_freq.
  • Time Complexity: O(1)—all operations.
  • Space Complexity: O(capacity)—maps and lists.

It’s like a librarian’s dream organizer!
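The same frequency-bucket idea can also be written with collections.OrderedDict, which gives O(1) insertion, deletion, and popitem(last=False) without hand-rolling the linked list. A sketch under that substitution (the class name LFUCacheOD is my own, not from the problem):

```python
from collections import defaultdict, OrderedDict

# Same freq-bucket design, but each bucket is an OrderedDict whose insertion
# order doubles as recency order: popitem(last=False) pops the LRU key.
class LFUCacheOD:
    def __init__(self, capacity):
        self.capacity = capacity
        self.key_freq = {}                       # key -> frequency
        self.buckets = defaultdict(OrderedDict)  # freq -> {key: value}, LRU first
        self.min_freq = 0

    def _bump(self, key):
        # Move key from its current bucket to the next-higher frequency bucket.
        freq = self.key_freq[key]
        value = self.buckets[freq].pop(key)
        if not self.buckets[freq]:
            del self.buckets[freq]
            if self.min_freq == freq:
                self.min_freq += 1
        self.key_freq[key] = freq + 1
        self.buckets[freq + 1][key] = value
        return value

    def get(self, key):
        if key not in self.key_freq:
            return -1
        return self._bump(key)

    def put(self, key, value):
        if self.capacity == 0:
            return
        if key in self.key_freq:
            self._bump(key)
            self.buckets[self.key_freq[key]][key] = value  # overwrite, keep order
            return
        if len(self.key_freq) >= self.capacity:
            # Evict the LRU key from the minimum-frequency bucket
            victim, _ = self.buckets[self.min_freq].popitem(last=False)
            if not self.buckets[self.min_freq]:
                del self.buckets[self.min_freq]
            del self.key_freq[victim]
        self.key_freq[key] = 1
        self.buckets[1][key] = value
        self.min_freq = 1
```

Everything here is still amortized O(1); it trades the explicit DLinkedList plumbing for OrderedDict's built-in ordering, which many interviewers accept as equivalent.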

Alternative Solution: Hash Map with List Simulation

Why an Alternative Approach?

The hash map with list simulation uses a simpler structure—O(n) time for eviction and O(n) space for its ever-growing order log—tracking keys, values, frequencies, and recency in plain containers. It’s like a basic logbook, scanning every entry for the LFU item—easier to write but less efficient.

How It Works

  • Step 1: Use hash map for key-value, freq, and list for order.
  • Step 2: Get: Lookup, increment freq, update order.
  • Step 3: Put: Add/update, evict LFU by scanning if full.

Code (Simplified)

class LFUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}  # key: (value, freq)
        self.order = []  # (key, timestamp)
        self.time = 0

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        val, freq = self.cache[key]
        self.cache[key] = (val, freq + 1)
        self.order.append((key, self.time))
        self.time += 1
        return val

    def put(self, key: int, value: int) -> None:
        if self.capacity == 0:
            return
        if key in self.cache:
            _, freq = self.cache[key]
            self.cache[key] = (value, freq + 1)
        else:
            if len(self.cache) >= self.capacity:
                # Evict: lowest frequency first; among ties, the key whose most
                # recent use is oldest (LRU). One pass collects last-use ticks.
                last_use = {}
                for k, t in self.order:
                    last_use[k] = t  # later entries overwrite, leaving the newest tick
                lfu_key = min(self.cache, key=lambda k: (self.cache[k][1], last_use[k]))
                del self.cache[lfu_key]
                self.order = [(k, t) for k, t in self.order if k != lfu_key]
            self.cache[key] = (value, 1)
        self.order.append((key, self.time))
        self.time += 1

  • Time Complexity: O(n)—each eviction scans the cache and the order log.
  • Space Complexity: O(n)—the order log grows with every call and is only pruned on eviction.

It’s a slow log keeper!

Comparing the Two Solutions

  • Doubly Linked List (Best):
    • Pros: O(1), optimal.
    • Cons: Complex.
  • Hash Map List (Alternative):
    • Pros: Much simpler to implement.
    • Cons: O(n) eviction scan; the order log grows without bound.

Doubly linked wins for speed.

Edge Cases and Examples

  • Capacity=0: All puts ignored.
  • Single key: No eviction needed.
  • All same freq: LRU applies.

Doubly linked handles all perfectly.
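The "all same freq" tie-break reduces to a single comparison: among eviction candidates, minimize the pair (frequency, last-use time). A standalone sketch with made-up counts and ticks (the keys and numbers are illustrative, not from the problem):

```python
# Eviction rule in isolation: lowest frequency first, and among ties,
# the smallest (oldest) last-use tick loses.
freq = {"a": 2, "b": 1, "c": 2}        # hypothetical use counts
last_use = {"a": 5, "b": 3, "c": 7}    # hypothetical last-access ticks

victim = min(freq, key=lambda k: (freq[k], last_use[k]))
print(victim)  # "b": uniquely lowest frequency

freq["b"] = 2                          # now all frequencies tie...
last_use["b"] = 9                      # ...and "b" was just accessed
victim = min(freq, key=lambda k: (freq[k], last_use[k]))
print(victim)  # "a": oldest last use among the tie
```

Python compares the tuples lexicographically, so frequency always dominates and recency only matters when frequencies are equal—exactly the LFU-then-LRU policy.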

Complexity Recap

  • Doubly Linked: Time O(1), Space O(capacity).
  • Hash Map List: Time O(n), Space O(n)—the order log grows with calls.

Doubly linked’s the champ.

Key Takeaways

  • Doubly Linked: O(1) mastery.
  • Hash Map List: Simpler trade-off.
  • Python Tip: Combine dicts (O(1) lookup) with doubly linked lists or OrderedDict (O(1) reordering)—see [Python Basics](/python/basics).

Final Thoughts: Master That Cache

LeetCode 460: LFU Cache in Python is a hard-hitting design challenge. The doubly linked list solution is your O(1) librarian, while the hash map list is a slow clerk. Want more cache fun? Try LeetCode 146: LRU Cache or LeetCode 432: All O`one Data Structure. Ready to shelve some keys? Head to Solve LeetCode 460 on LeetCode and cache it today—your coding skills are fully stocked!