Implementing Cache Invalidation with Time-To-Live (TTL) in Go
Caching is a crucial technique for optimizing application performance by storing frequently accessed data in a readily available location. However, cached data can become stale, leading to incorrect results. This challenge focuses on building a simple in-memory cache with TTL-based invalidation in Go, ensuring data freshness while maintaining performance benefits.
Problem Description
You are tasked with creating a Cache struct in Go that stores key-value pairs. Each entry in the cache should have a Time-To-Live (TTL) associated with it, indicating how long the entry remains valid before it's considered expired. The cache should provide methods for:
- Get(key string) (interface{}, bool): Retrieves the value associated with a given key. Returns the value and a boolean indicating whether the key exists and has not expired. If the key doesn't exist or has expired, returns nil and false.
- Set(key string, value interface{}, ttl time.Duration): Adds a new key-value pair to the cache with the specified TTL. If the key already exists, its value and TTL are updated.
- Delete(key string): Removes a key-value pair from the cache.
- Cleanup(): Iterates through the cache and removes any expired entries. This should be called periodically to keep the cache size manageable.
The cache should use time.Duration for TTLs and time.Now() for determining expiration times. The cache should be thread-safe.
Examples
Example 1:
Input:
cache := NewCache()
cache.Set("user1", "John Doe", 5 * time.Second)
time.Sleep(3 * time.Second)
value, ok := cache.Get("user1")
fmt.Println(value, ok)
time.Sleep(3 * time.Second)
value, ok = cache.Get("user1")
fmt.Println(value, ok)
Output:
John Doe true
<nil> false
Explanation: The first Get retrieves "John Doe" because the TTL is still valid. The second Get returns nil and false because the TTL has expired.
Example 2:
Input:
cache := NewCache()
cache.Set("product1", 100, 10 * time.Second)
cache.Set("product2", 200, 15 * time.Second)
cache.Delete("product1")
value, ok := cache.Get("product1")
fmt.Println(value, ok)
value, ok = cache.Get("product2")
fmt.Println(value, ok)
Output:
<nil> false
200 true
Explanation: "product1" is deleted, so Get("product1") returns nil and false. "product2" remains in the cache and is accessible.
Example 3: (Edge Case - Concurrent Access)
Input: (multiple goroutines accessing the cache concurrently)
// Goroutine 1: cache.Set("key1", "value1", 2 * time.Second)
// Goroutine 2: cache.Get("key1") (runs shortly after the Set)
// Goroutine 3: time.Sleep(1 * time.Second), then cache.Get("key1")
Output: (depends on timing, but must be consistent for a given interleaving)
value1 true (from Goroutine 2)
value1 true (from Goroutine 3)
Explanation: The cache must handle concurrent access safely, ensuring that reads and writes don't lead to data corruption or race conditions.
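The core of the thread-safety requirement is a mutex-guarded map; everything else in the cache builds on this. Below is a minimal, self-contained illustration of the pattern (the hammer helper is hypothetical scaffolding for this sketch, not part of the required API). Running it with `go run -race` should report no data races; remove the locks and the race detector will flag concurrent map access.

```go
package main

import (
	"fmt"
	"sync"
)

// hammer launches n writer and n reader goroutines against a
// mutex-guarded map and returns the final value stored at "key1".
func hammer(n int) string {
	var mu sync.RWMutex
	data := map[string]string{}

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(2)
		go func() { // writer: exclusive lock
			defer wg.Done()
			mu.Lock()
			data["key1"] = "value1"
			mu.Unlock()
		}()
		go func() { // reader: shared lock, may run before any write
			defer wg.Done()
			mu.RLock()
			_ = data["key1"]
			mu.RUnlock()
		}()
	}
	wg.Wait()
	return data["key1"]
}

func main() {
	fmt.Println(hammer(100)) // value1
}
```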
Constraints
- The cache should be in-memory.
- TTLs must be represented as time.Duration.
- The cache should be thread-safe. Use appropriate synchronization primitives (e.g., sync.Mutex).
- The Cleanup() method should remove all expired entries.
- The maximum number of entries in the cache is not explicitly limited, but consider the memory-usage implications of very large caches.
- The interface{} type is used to allow storing values of any type.
Notes
- Consider using a map to store the cache entries.
- The Cleanup() method should be called periodically (e.g., from a goroutine with a timer) to prevent the cache from growing indefinitely.
- Think about how to handle potential race conditions when multiple goroutines access the cache concurrently. sync.Mutex is a good starting point.
- The time.Now() function should be used to calculate expiration times.
- Focus on correctness and thread safety first, then consider performance optimizations if needed. A simple implementation is preferred over an overly complex one.
- The NewCache() function should initialize and return a new Cache instance.