# Monstra
English | 简体中文
A high-performance Swift framework providing efficient task execution, memory caching, and data management utilities with intelligent execution merging, TTL caching, and retry logic.
Documentation: <a href="https://yangchenlarkin.github.io/Monstra/" target="_blank" rel="noopener noreferrer">API Reference (Jazzy)</a>
## 🚀 Features

### Monstore - Caching System

#### MemoryCache
- ⏰ TTL & Priority Support: Advanced time-to-live functionality with automatic expiration and configurable priority-based eviction
- 💥 Avalanche Protection: Intelligent TTL randomization prevents cache stampede and simultaneous expiration cascades
- 🛡️ Breakdown Protection: Comprehensive null value caching and robust key validation for enhanced reliability
- 📊 Statistics & Monitoring: Built-in cache statistics, performance metrics, and real-time monitoring capabilities
### Monstask - Task Execution Framework

#### MonoTask
- 🔄 Execution Merging: Multiple concurrent requests merged into single execution
- ⏱️ TTL Caching: Results cached for configurable duration with automatic expiration
- 🔄 Advanced Retry Logic: Exponential backoff, fixed intervals, and hybrid retry strategies
- 🎯 Manual Cache Control: Fine-grained cache invalidation with execution strategy options
#### KVLightTasksManager
- 📈 Peak Shaving: Prevents excessive task execution volume through priority-based scheduling (LIFO/FIFO strategies with configurable limits)
- 🔄 Batch Processing: Support for single and batch data provisioning to enhance backend execution efficiency
- 📊 Concurrent Execution: Configurable concurrent task limits (default: 4 running, 256 queued)
- 🎯 Execution Merging: Intelligent request deduplication and merging to prevent duplicate work and optimize resource usage
- 💾 Result Caching: Integrated MemoryCache for optimized performance
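As an illustration of the behavior described above, a batch-backed fetch might look like the sketch below. This is a hypothetical sketch only: the initializer shape, the `.multiprovide` provider case, the `fetch(key:)` signature, and the `UserProfile` type are assumptions for illustration, not confirmed Monstra API — consult the API reference for the real interface.

```swift
import Monstra

struct UserProfile { let id: String; let name: String }

// Hypothetical sketch: the initializer and fetch signatures below are
// assumptions for illustration, not confirmed Monstra API.
let profiles = KVLightTasksManager<String, UserProfile>(
    config: .init(
        // Batch provider: one backend call can resolve many keys at once.
        dataProvider: .multiprovide { keys, completion in
            // Stand-in for a real batch backend request.
            let fetched = Dictionary(uniqueKeysWithValues: keys.map {
                ($0, UserProfile(id: $0, name: "Name for \($0)"))
            })
            completion(.success(fetched))
        }
    )
)

// Concurrent fetches of the same key are merged into a single provider call,
// and results land in the integrated MemoryCache.
profiles.fetch(key: "user-42") { _, result in
    if case .success(let profile) = result {
        print(profile?.name ?? "nil")
    }
}
```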
#### KVHeavyTasksManager
- 📊 Progress Tracking: Real-time progress updates with custom event publishing and broadcasting capabilities
- 🎯 Priority-Based Scheduling: Advanced LIFO/FIFO strategies with intelligent interruption support
- 🔄 Task Lifecycle Management: Complete start/stop/resume functionality with provider state preservation
- 📱 Concurrent Control: Optimized concurrent execution limits (default: 2 running, 64 queued)
- 🎯 Execution Merging: Intelligent request deduplication and merging to prevent duplicate work and optimize resource usage
- 💾 Result Caching: Integrated MemoryCache for enhanced performance and efficiency
## 🚀 Quick Start

### Installation

#### Swift Package Manager (Recommended)

Add Monstra to your `Package.swift`:
```swift
dependencies: [
    .package(url: "https://github.com/yangchenlarkin/Monstra.git", from: "0.1.0")
]
```
Or add it directly in Xcode:
- File → Add Package Dependencies
- Enter the repository URL: `https://github.com/yangchenlarkin/Monstra.git`
- Select the version you want to use
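If you edit the manifest by hand, the package product can then be wired into a target. The product name `Monstra` here is an assumption that it matches the package name, which is consistent with the unified-framework note in the CocoaPods section.

```swift
// In Package.swift: add the Monstra product as a target dependency.
// The product name "Monstra" is assumed to match the package name.
.target(
    name: "MyApp",
    dependencies: [
        .product(name: "Monstra", package: "Monstra")
    ]
)
```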
#### CocoaPods

Add Monstra to your `Podfile`:

```ruby
pod 'Monstra', '~> 0.1.0'
```

> **Note:** Monstra is published as a unified framework, so you get all components together.
## 🎯 When to Use Each Component

| Component | Best For | Key Features |
|-----------|----------|--------------|
| MonoTask | Single expensive operations | Execution merging, TTL caching, retry logic |
| KVLightTasksManager | Fast, lightweight operations | Batch processing, key validation, high throughput |
| KVHeavyTasksManager | Resource-intensive operations | Progress tracking, lifecycle management, error recovery |
### Some Scenarios for Each Component
- MonoTask: API calls, database queries, expensive computations that benefit from caching and deduplication
- KVLightTasksManager: User profile fetching, search results, configuration loading, high-frequency operations
- KVHeavyTasksManager: File downloads, video processing, ML inference, long-running operations with progress updates
## 💡 Simple Examples

### 1. MemoryCache

Basic caching operations with TTL expiration, priority-based eviction, and LRU eviction.

**Simple Example (Default Configuration):**
```swift
import Monstra

// Create a basic cache with the default configuration
let cache = MemoryCache<String, Int>()

// Set values with different priorities and TTLs
cache.set(element: 42, for: "answer", priority: 10.0, expiredIn: 3600.0) // 1 hour, high priority
cache.set(element: 100, for: "score", priority: 1.0) // Default TTL, low priority
cache.set(element: nil, for: "user-999") // Cache a null value

// Get values using the FetchResult enum
switch cache.getElement(for: "answer") {
case .hitNonNullElement(let value):
    print("Found answer: \(value)")
case .hitNullElement:
    print("Found null value")
case .miss:
    print("Key not found or expired")
case .invalidKey:
    print("Invalid key")
}

// Check cache status
print("Cache count: \(cache.count)")
print("Cache capacity: \(cache.capacity)")
print("Is empty: \(cache.isEmpty)")
print("Is full: \(cache.isFull)")

// Remove a specific element
let removed = cache.removeElement(for: "score")
print("Removed: \(removed ?? -1)")

// Clean up expired elements
cache.removeExpiredElements()
```
**Detailed Configuration Example:**
```swift
// Advanced configuration with all options
let imageCache = MemoryCache<String, Data>(
    configuration: .init(
        // Thread safety: enable DispatchSemaphore synchronization for concurrent access
        enableThreadSynchronization: true,
        // Memory & capacity limits: at most 100 items and 50MB of memory
        memoryUsageLimitation: .init(
            capacity: 100, // Maximum number of cached items
            memory: 50     // Maximum memory usage in MB
        ),
        // TTL settings: how long items stay in the cache
        defaultTTL: 1800.0,              // 30 minutes for regular elements
        defaultTTLForNullElement: 300.0, // 5 minutes for null/nil elements
        // Cache stampede prevention: randomize TTLs by ±30 seconds
        ttlRandomizationRange: 30.0, // Prevents all items from expiring simultaneously
        // Key validation: only accept keys starting with "img_"
        keyValidator: { key in
            return key.hasPrefix("img_") // Custom validation logic
        },
        // Memory cost calculation: use the actual data size for eviction decisions
        costProvider: { data in
            return data.count // Return size in bytes
        }
    )
)
```
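To see the key validator and cost provider in action, the configured cache can be exercised with the same `set`/`getElement` API used in the simple example above (the byte count here is illustrative):

```swift
// Keys must start with "img_", per the keyValidator above
let thumbnail = Data(repeating: 0xFF, count: 32_768) // 32KB of sample bytes

imageCache.set(element: thumbnail, for: "img_thumb_001") // accepted; costProvider reports 32,768 bytes
imageCache.set(element: thumbnail, for: "thumb_001")     // rejected by the key validator

switch imageCache.getElement(for: "thumb_001") {
case .invalidKey:
    print("Rejected: key does not start with \"img_\"")
default:
    break
}
```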
### 2. MonoTask

Single-task execution and merging: MonoTask handles individual task execution, request merging, and result caching. Typical uses include module initialization, configuration-file reads, and API-call consolidation with cached results (e.g., UserProfile fetches or e-commerce Cart operations).

**Simple Example (Default Configuration):**
```swift
import Monstra

// Create a basic task with minimal configuration
let networkTask = MonoTask<Data> { callback in
    // Your network request logic here
    let url = URL(string: "https://api.example.com/data")!
    URLSession.shared.dataTask(with: url) { data, response, error in
        if let error = error {
            callback(.failure(error))
        } else if let data = data {
            callback(.success(data))
        } else {
            // Ensure the callback is always invoked, even if both data and error are nil
            callback(.failure(URLError(.badServerResponse)))
        }
    }.resume()
}
// Alternatively, you can create a MonoTask from an asynchronous block

// Multiple execution patterns - only one network request is made
// Note: all executions benefit from MonoTask's execution merging

// Execute with async/await
let result1: Result<Data, Error> = await networkTask.asyncExecute()
switch result1 {
case .success(let data):
    print("Got data: \(data.count) bytes")
case .failure(let error):
    print("Error: \(error)")
}

// Execute with async/await and try/catch
do {
    let result2: Data = try await networkTask.executeThrows() // Second execution, returns the cached result
    print("Result2: \(result2)")
} catch {
    print("Result2 error: \(error)")
}

// Fire-and-forget execution
networkTask.justExecute()

// Callback-based execution
networkTask.execute { result in
    switch result {
    case .success(let data):
        print("Result3 (callback): \(data.count) bytes")
    case .failure(let error):
        print("Result3 (callback) error: \(error)")
    }
}
```
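The feature list also mentions manual cache control. A hedged sketch of forcing a refresh is shown below; the `clearResult()` method name is an assumption based on the feature description, not confirmed API - check the API reference for the exact method and its execution-strategy options.

```swift
// Hypothetical: invalidate the cached result so the next execution
// runs the task again. `clearResult()` is an assumed name, not confirmed API.
networkTask.clearResult()
let fresh: Result<Data, Error> = await networkTask.asyncExecute() // re-runs the network request
```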
**Detailed Configuration Example:**
```swift
// Advanced configuration with custom retry and queue settings
let fileProcessor1 = MonoTask<ProcessedData>(
    retry: 3, // Simple retry-count configuration
    // Result caching: keep results for 5 minutes
    resultExpireDuration: 300.0,
    // Task queue: custom dispatch queue for task execution
    taskQueue: DispatchQueue.global(qos: .utility), // Background-priority queue
    // Callback queue: custom dispatch queue for callbacks
    callbackQueue: DispatchQueue.global(qos: .userInitiated) // High-priority queue
) { callback in
    // Your file processing logic here
    let filePath = "/path/to/large/file.txt"
    do {
        let data = try Data(contentsOf: URL(fileURLWithPath: filePath))
        let processedData = ProcessedData(content: data, metadata: ["size": data.count])
        callback(.success(processedData))
    } catch {
        callback(.failure(error))
    }
}
```
