Database queries are the usual suspects when your Laravel app starts feeling sluggish. Every time a user loads a page, your application might be hitting the database multiple times to fetch the same data. This repetitive work wastes server resources and slows down response times.
Caching solves this by storing frequently accessed data in a fast-access layer. While Redis and Memcached are popular choices, there's an often-overlooked alternative: MongoDB itself. If you're already using MongoDB as your database, why add another service to your stack?
With the official mongodb/laravel-mongodb package (version 5.5.0 as of 2025), you can use MongoDB as your cache store with native support for TTL indexes that automatically clean up expired cache entries. This means fewer moving parts in your infrastructure while still getting excellent caching performance.
Prerequisites
Before you start, make sure you have:
• PHP 8.1 or higher with the MongoDB PHP extension installed.
• Laravel 11.x or 12.x—the latest Laravel MongoDB integration supports both.
• MongoDB 4.4+ running locally or a MongoDB Atlas cluster.
• Composer for package management.
If you're using Laravel Herd or installed PHP via php.new, the MongoDB extension is already available. You can verify it's installed by running:
php --ri mongodb
You should see output showing the MongoDB extension version and configuration.
Environment setup
First, let's confirm your MongoDB extension is properly installed and enabled in both CLI and web server configurations:
php --ri mongodb
If the command doesn't return MongoDB extension information, you'll need to install it:
pecl install mongodb
Then, add extension=mongodb.so (Linux/Mac) or extension=mongodb.dll (Windows) to your php.ini file.
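If you want to double-check the web server's PHP as well (its php.ini can differ from the CLI's), a quick temporary check with PHP's built-in functions works; a minimal sketch:

```php
<?php
// Temporary check: confirm the web SAPI loads ext-mongodb. Remove after verifying.
var_dump(extension_loaded('mongodb')); // bool(true) when the extension is active
echo phpversion('mongodb');            // extension version, e.g. "1.x" (illustrative)
```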
Install the MongoDB Laravel package
The mongodb/laravel-mongodb package is officially maintained by MongoDB and provides the cache driver along with Eloquent support and query builder extensions.
Install it via Composer:
composer require mongodb/laravel-mongodb
Laravel 12 support was added in version 5.2.0, released in March 2025. The package automatically registers the service provider, so you don't need manual configuration in Laravel 11+.
Configure MongoDB as cache driver
Setting up MongoDB caching requires configuration in two places: your database connection and cache store.
Database connection setup
Open config/database.php and add a MongoDB connection:
```php
'connections' => [
    // ... existing connections

    'mongodb' => [
        'driver' => 'mongodb',
        'dsn' => env('MONGODB_URI', 'mongodb://localhost:27017'),
        'database' => env('MONGODB_DATABASE', 'laravel'),
    ],
],
```
Now, update your .env file with your MongoDB connection details:
```
MONGODB_URI=mongodb://localhost:27017
MONGODB_DATABASE=laravel
```
For MongoDB Atlas, your connection string looks like this:
```
MONGODB_URI=mongodb+srv://username:password@cluster.mongodb.net/?retryWrites=true&w=majority
MONGODB_DATABASE=laravel
```
Cache store configuration
Open config/cache.php and add the MongoDB store configuration:
```php
'stores' => [
    // ... existing stores

    'mongodb' => [
        'driver' => 'mongodb',
        'connection' => 'mongodb',
        'collection' => 'cache',
        'lock_connection' => 'mongodb',
        'lock_collection' => 'cache_locks',
    ],
],
```
Set MongoDB as your default cache driver by updating .env:
CACHE_STORE=mongodb
Alternatively, update the default in config/cache.php:
'default' => env('CACHE_STORE', 'mongodb'),
Setting up TTL indexes
Time to live (TTL) indexes are MongoDB's automatic document expiration feature. They're optional but highly recommended because they:
- Automatically remove expired cache entries—no application code needed.
- Improve performance—MongoDB handles cleanup in the background.
- Save storage space—your cache collection stays lean.
MongoDB's background thread checks for expired documents every 60 seconds and removes them in batches of up to 50,000 documents per index.
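If you want to watch the TTL monitor at work, serverStatus exposes counters for it; a quick mongosh check (output values are illustrative):

```js
// TTL passes and documents removed since the server started
db.serverStatus().metrics.ttl
// { deletedDocuments: Long("1245"), passes: Long("86400") }
```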
Important TTL index requirements:
- TTL indexes work on single-field indexes only.
- The indexed field must contain date values or arrays of date values.
- MongoDB stores dates as BSON date types, not strings.
Create a migration to set up TTL indexes for both cache and locks:
php artisan make:migration create_mongodb_cache_indexes
Open the newly created migration file and add:
```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\Cache;
use MongoDB\Laravel\Cache\MongoStore;
use MongoDB\Laravel\Cache\MongoLock;

return new class extends Migration
{
    public function up(): void
    {
        // Create TTL index for cache store
        $store = Cache::store('mongodb')->getStore();

        if ($store instanceof MongoStore) {
            $store->createTTLIndex();
        }

        // Create TTL index for cache locks
        $lock = Cache::store('mongodb')->lock('setup_lock');

        if ($lock instanceof MongoLock) {
            $lock->createTTLIndex();
        }
    }

    public function down(): void
    {
        // TTL indexes can be manually dropped if needed.
        // Connect to MongoDB and run:
        //   db.cache.dropIndex("expires_at_1")
        //   db.cache_locks.dropIndex("expires_at_1")
    }
};
```
Run the migration:
php artisan migrate
The createTTLIndex() method creates an index on the expires_at field, telling MongoDB to automatically delete documents when their expiration time is reached.
Basic cache usage
Laravel's cache facade provides a consistent API regardless of your cache backend. Here are the essential operations.
Storing and retrieving data
Store a value in the cache with an expiration time in seconds:
```php
use Illuminate\Support\Facades\Cache;

// Store for 60 seconds
Cache::put('user_count', 1500, 60);

// Store for 1 hour using Carbon
Cache::put('dashboard_stats', $stats, now()->addHour());

// Retrieve a value
$count = Cache::get('user_count');

// Retrieve with a default if not found
$count = Cache::get('user_count', 0);

// Check if a key exists
if (Cache::has('user_count')) {
    // Key exists and hasn't expired
}

// Remove a specific item
Cache::forget('user_count');

// Clear all cache
Cache::flush();
```
Remember pattern
The remember method retrieves a value from cache or executes a closure to compute and store it:
```php
$users = Cache::remember('active_users', 3600, function () {
    return User::where('status', 'active')
        ->with('profile')
        ->get();
});
```
This pattern is perfect for expensive database queries. The first time it runs, it executes the query and caches the result. Subsequent calls within the TTL period return the cached data instantly.
For data that rarely changes, use rememberForever:
```php
$settings = Cache::rememberForever('app_settings', function () {
    return Setting::all()->pluck('value', 'key')->toArray();
});
```
Increment/decrement
MongoDB cache supports atomic numeric operations:
```php
// Initialize a counter
Cache::put('page_views', 0, 3600);

// Increment by 1
Cache::increment('page_views');

// Increment by a specific amount
Cache::increment('page_views', 10);

// Decrement
Cache::decrement('api_calls_remaining');
Cache::decrement('api_calls_remaining', 5);

// Works with floats too
Cache::increment('total_revenue', 99.99);
```
These operations are atomic, meaning they're safe to use in concurrent environments where multiple processes might update the same counter simultaneously.
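To see why that matters, compare a naive read-modify-write with the atomic call; a minimal sketch:

```php
use Illuminate\Support\Facades\Cache;

// Unsafe read-modify-write: two workers can read the same value and
// both write back value + 1, losing one of the updates.
$views = Cache::get('page_views', 0);
Cache::put('page_views', $views + 1, 3600);

// Atomic: the addition happens in a single store-side operation,
// so concurrent increments are never lost.
Cache::increment('page_views');
```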
Real-world example: Product catalog
Let's implement caching in a ProductController to speed up product catalog queries:
```php
<?php

namespace App\Http\Controllers;

use App\Http\Requests\UpdateProductRequest;
use App\Models\Product;
use Illuminate\Support\Facades\Cache;

class ProductController extends Controller
{
    public function index()
    {
        // Cache product catalog for 30 minutes
        $products = Cache::remember('products:catalog', 1800, function () {
            return Product::with(['category', 'images'])
                ->where('status', 'active')
                ->orderBy('featured', 'desc')
                ->orderBy('created_at', 'desc')
                ->limit(100)
                ->get();
        });

        return view('products.index', compact('products'));
    }

    public function show(string $slug)
    {
        // Cache individual product for 1 hour
        $product = Cache::remember("product:{$slug}", 3600, function () use ($slug) {
            return Product::with(['category', 'images', 'variants', 'reviews'])
                ->where('slug', $slug)
                ->firstOrFail();
        });

        return view('products.show', compact('product'));
    }

    // UpdateProductRequest is a form request that authorizes and
    // validates the input, so validated() is available here
    public function update(UpdateProductRequest $request, string $id)
    {
        $product = Product::findOrFail($id);

        // Update product...
        $product->update($request->validated());

        // Invalidate cache
        Cache::forget("product:{$product->slug}");
        Cache::forget('products:catalog');

        return redirect()->back()->with('success', 'Product updated successfully');
    }
}
```
This controller caches both the product list and individual product pages. When a product is updated, we invalidate the related cache entries to ensure users see fresh data.
Understanding cache limitations with MongoDB
While MongoDB provides excellent caching capabilities, it's important to understand one limitation: Cache tags are not supported with the MongoDB cache driver.
Laravel's cache tags feature works with in-memory drivers like Redis and Memcached, but not with database-backed drivers like MongoDB, file, or DynamoDB. This is because tags require specific data structures that in-memory stores provide.
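If you call tags() on the MongoDB store anyway, Laravel rejects it at runtime rather than silently ignoring the tags; a quick illustration:

```php
use Illuminate\Support\Facades\Cache;

// Throws BadMethodCallException ("This cache store does not support tagging.")
Cache::store('mongodb')->tags(['products'])->put('featured', $products, 3600);
```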
If you need cache tagging functionality, you have two options:
Option 1: Use key prefixes to organize related cache entries:
```php
// Organize with key prefixes instead of tags
Cache::put('products:featured:list', $products, 3600);
Cache::put('products:category:electronics', $electronics, 3600);
Cache::put('products:category:clothing', $clothing, 3600);

// Invalidate by tracking related keys
Cache::forget('products:featured:list');
Cache::forget('products:category:electronics');

// Or maintain a registry of related keys
$productKeys = ['products:featured:list', 'products:category:electronics'];

foreach ($productKeys as $key) {
    Cache::forget($key);
}
```
Option 2: Use Redis for tagging alongside MongoDB:
```php
// config/cache.php
'stores' => [
    'mongodb' => [
        'driver' => 'mongodb',
        'connection' => 'mongodb',
        'collection' => 'cache',
    ],

    'redis' => [
        'driver' => 'redis',
        'connection' => 'cache',
    ],
],
```

```php
// Use MongoDB for general caching
Cache::store('mongodb')->put('user_profile', $profile, 3600);

// Use Redis when you need tags
Cache::store('redis')->tags(['products', 'featured'])
    ->put('featured_products', $products, 3600);
```
For most applications, the key prefix approach works well and keeps your caching infrastructure simple.
Distributed locks
When multiple processes might update the same cached data simultaneously, use locks to prevent race conditions:
```php
use Illuminate\Support\Facades\Cache;

public function processOrder(string $orderId)
{
    // Try to acquire lock for 10 seconds
    $lock = Cache::lock("order:processing:{$orderId}", 10);

    if ($lock->get()) {
        try {
            // Process the order...
            $this->processPayment($orderId);
            $this->updateInventory($orderId);
        } finally {
            $lock->release();
        }
    } else {
        // Couldn't acquire lock, another process is handling it
        throw new \Exception('Order is already being processed');
    }
}
```
For blocking behavior, use the block method:
```php
// Wait up to 5 seconds for the lock
$lock = Cache::lock('inventory:update', 10);

$lock->block(5, function () {
    // This code runs once the lock is acquired.
    // The lock is automatically released after the closure completes.
    $this->updateInventory();
});
```
MongoDB stores locks in the cache_locks collection with automatic expiration using TTL indexes, ensuring locks don't persist if a process crashes.
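Because each lock records an owner token, a lock acquired in one request can be released by a different process using Laravel's restoreLock pattern; a minimal sketch (the ProcessOrder job is hypothetical):

```php
use App\Jobs\ProcessOrder; // hypothetical queued job
use Illuminate\Support\Facades\Cache;

$lock = Cache::lock("order:processing:{$orderId}", 120);

if ($lock->get()) {
    // Hand the owner token to the job instead of releasing here
    ProcessOrder::dispatch($orderId, $lock->owner());
}

// Later, inside ProcessOrder::handle():
Cache::restoreLock("order:processing:{$this->orderId}", $this->owner)->release();
```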
MongoDB vs Redis for caching
Choosing between MongoDB and Redis depends on your application's specific needs:
| Factor | MongoDB cache | Redis |
|---|---|---|
| Infrastructure | Uses existing MongoDB instance | Requires separate Redis server |
| Performance | Excellent for most use cases | Slightly faster (sub-millisecond) |
| Scalability | Better horizontal scaling | Limited by available RAM |
| Data structures | Documents (flexible JSON-like) | Specialized (lists, sets, hashes) |
| Query flexibility | Full MongoDB query language | Key-value lookups only |
| Persistence | Full durability by default | Configurable, can lose data |
| Memory management | Disk-based with in-memory cache | Fully in-memory |
Choose MongoDB caching when:
- You already use MongoDB as your primary database.
- You want to simplify infrastructure (no additional cache server).
- Your cached data benefits from document structure.
- Horizontal scalability is important.
- You need reliable persistence of cached data.
Choose Redis caching when:
- You need consistent sub-millisecond latency.
- Your application uses advanced data structures (sorted sets, pub/sub).
- You're building real-time features (leaderboards, sessions).
- Raw speed is the primary requirement.
Many applications use both: MongoDB for general caching and Redis for latency-critical operations.
Best practices
Follow these guidelines to optimize your MongoDB cache implementation:
Index performance considerations
For optimal cache lookup performance, create an index on the key field in your cache collection:
```php
// Add to your TTL index migration or create a separate migration
use Illuminate\Support\Facades\DB;

$collection = DB::connection('mongodb')->getCollection('cache');

$collection->createIndex(['key' => 1]);
```
This index significantly improves cache retrieval speed, especially as your cache collection grows. The createTTLIndex() method automatically creates the TTL index on expires_at, but you should create the key index separately for better lookup performance.
Cache key naming
Use descriptive, hierarchical keys:
```php
// Good: Descriptive and organized
$key = "user:{$userId}:profile";
$key = "products:category:{$categoryId}:page:{$page}";
$key = "stats:daily:{$date}";

// Avoid: Generic or collision-prone
$key = "data";
$key = "user";
```
Appropriate TTL values
Match TTL to how often data changes:
```php
// Frequently changing: short TTL
Cache::put('stock_price', $price, 30); // 30 seconds

// Occasionally changing: medium TTL
Cache::put('product_list', $products, 1800); // 30 minutes

// Rarely changing: long TTL
Cache::put('categories', $categories, 86400); // 24 hours

// Static configuration: indefinite
Cache::forever('feature_flags', $flags);
```
Cache warming
Pre-populate cache during deployments or off-peak hours to avoid the "cold cache" problem:
```php
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Cache;
use App\Models\Product;

class WarmCache extends Command
{
    protected $signature = 'cache:warm';

    protected $description = 'Warm up application cache';

    public function handle()
    {
        $this->info('Warming cache...');

        // Pre-cache frequently accessed data
        Cache::put('products:featured',
            Product::featured()->limit(20)->get(),
            3600
        );

        Cache::put('products:catalog',
            Product::active()->with('category')->limit(100)->get(),
            1800
        );

        $this->info('Cache warmed successfully!');
    }
}
```
Run this command during deployments:
php artisan cache:warm
Avoiding cache stampedes
When many users request the same expired cache key simultaneously, they all trigger the expensive operation. Add random jitter to prevent this:
```php
$baseTtl = 3600; // 1 hour
$jitter = random_int(-300, 300); // ±5 minutes

Cache::put('key', $value, $baseTtl + $jitter);
```
Cache invalidation
Cache invalidation is challenging, but these strategies help:
Model events for automatic invalidation
Use Laravel's model events to automatically clear cache when data changes:
```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Support\Facades\Cache;

class Product extends Model
{
    protected static function booted(): void
    {
        static::saved(function (Product $product) {
            // Invalidate specific product cache
            Cache::forget("product:{$product->slug}");

            // Invalidate catalog cache
            Cache::forget('products:catalog');

            // Invalidate category-specific caches
            Cache::forget("products:category:{$product->category_id}");
        });

        static::deleted(function (Product $product) {
            Cache::forget("product:{$product->slug}");
            Cache::forget('products:catalog');
        });
    }
}
```
Now, whenever a product is created, updated, or deleted, the related caches are automatically invalidated.
Time-based expiration
For data that changes predictably, rely on TTL:
```php
// Exchange rates update hourly
$rates = Cache::remember('exchange_rates', 3600, function () {
    return ExchangeRateService::fetchRates();
});

// Daily statistics
$stats = Cache::remember('stats:daily:' . now()->format('Y-m-d'), 86400, function () {
    return Analytics::getDailyStats();
});
```
Event-driven cache clearing
For complex invalidation logic, use Laravel events:
```php
// In a service provider's boot() method (Laravel 11+ no longer
// ships an EventServiceProvider by default)
Event::listen(OrderPlaced::class, function (OrderPlaced $event) {
    Cache::forget("user:{$event->order->user_id}:orders");
    Cache::forget('orders:recent');
});
```
Cache events and monitoring
Laravel fires events when cache operations occur. You can listen to these events for monitoring and debugging.
Using cache events
Register listeners in the boot method of a service provider (Laravel 11+ no longer ships an EventServiceProvider by default, so AppServiceProvider works):
```php
use Illuminate\Cache\Events\CacheHit;
use Illuminate\Cache\Events\CacheMissed;
use Illuminate\Cache\Events\KeyWritten;
use Illuminate\Support\Facades\Event;
use Illuminate\Support\Facades\Log;

Event::listen(CacheHit::class, function (CacheHit $event) {
    Log::debug('Cache hit', ['key' => $event->key]);
});

Event::listen(CacheMissed::class, function (CacheMissed $event) {
    Log::debug('Cache miss', ['key' => $event->key]);
});

Event::listen(KeyWritten::class, function (KeyWritten $event) {
    Log::debug('Cache written', ['key' => $event->key, 'ttl' => $event->seconds]);
});
```
Using Laravel Telescope for cache debugging
Laravel Telescope provides detailed cache operation insights. Install it:
```bash
composer require laravel/telescope --dev
php artisan telescope:install
php artisan migrate
```
Note: Telescope stores its data in a relational database (MySQL, PostgreSQL, or SQLite), separate from your MongoDB cache. This is normal—Telescope needs a SQL database for its own data storage.
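If MongoDB is your default database connection, point Telescope at a SQL connection explicitly; a sketch assuming a sqlite connection is defined in config/database.php:

```php
// config/telescope.php
'storage' => [
    'database' => [
        'connection' => env('TELESCOPE_DB_CONNECTION', 'sqlite'),
        'chunk' => 1000,
    ],
],
```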
Telescope's cache watcher records data when a cache key is hit, missed, updated, or forgotten. Access the dashboard at /telescope to see:
- Cache hit/miss events.
- Keys being accessed.
- Time spent on cache operations.
- Cache store being used.
Telescope is perfect for local development but too detailed for production.
Using Laravel Pulse for cache metrics
For production monitoring, use Laravel Pulse. Install it:
```bash
composer require laravel/pulse
php artisan pulse:install
php artisan migrate
```
Important: Pulse requires a MySQL, MariaDB, or PostgreSQL database for storing its metrics. If your application only uses MongoDB, you'll need a separate SQL database for Pulse data. This is a small trade-off for the powerful monitoring capabilities it provides.
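As with Telescope, Pulse reads its storage connection from config; one env line keeps the app on MongoDB while Pulse writes to MySQL (assuming a mysql connection exists in config/database.php):

```
# .env
PULSE_DB_CONNECTION=mysql
```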
Access Pulse at /pulse to see real-time cache metrics:
- Cache hit/miss rates
- Most accessed cache keys
- Cache performance trends
The cache card tracks cache key usage along with hits and misses, giving you at-a-glance insights into cache effectiveness.
Tracking cache hit/miss ratios
Your cache hit ratio should be above 80% for good performance. Calculate it:
Cache Hit Ratio = Total Hits / (Total Hits + Total Misses) × 100

For example, 900 hits and 100 misses give 900 / 1,000 × 100 = 90%.
Monitor this metric to ensure your caching strategy is effective. If it's below 80%, consider:
- Increasing TTL values.
- Caching more aggressively.
- Reviewing which data gets cached.
- Implementing cache warming.
Performance benchmarks
With proper MongoDB caching:
- Response times can improve significantly by reducing database query overhead.
- Database load can be substantially reduced through effective caching strategies.
- Cache hit rates should target 80%+ for frequently accessed data.
Cache warming for production
Cache warming pre-populates your cache during deployment, avoiding the "cold cache" problem that causes slow initial requests.
Why warm cache during deployment
After deploying new code or restarting your application, the cache is empty. Every request triggers expensive database queries until the cache populates naturally. This creates a poor user experience right when you're most likely to have users checking your updates.
Creating a cache warming command
We already created a basic warming command. Here's a more comprehensive version:
```php
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\Cache;
use App\Models\{Product, Category, Setting};

class WarmCache extends Command
{
    protected $signature = 'cache:warm';

    protected $description = 'Warm up application cache with frequently accessed data';

    public function handle(): int
    {
        $this->info('Starting cache warming...');

        $bar = $this->output->createProgressBar(5);

        // Warm product catalog
        $this->warmProductCatalog();
        $bar->advance();

        // Warm categories
        $this->warmCategories();
        $bar->advance();

        // Warm site settings
        $this->warmSettings();
        $bar->advance();

        // Warm featured content
        $this->warmFeaturedContent();
        $bar->advance();

        // Warm navigation
        $this->warmNavigation();
        $bar->advance();

        $bar->finish();
        $this->newLine();
        $this->info('Cache warming completed successfully!');

        return Command::SUCCESS;
    }

    protected function warmProductCatalog(): void
    {
        Cache::put('products:catalog',
            Product::with('category')->active()->limit(100)->get(),
            1800
        );
    }

    protected function warmCategories(): void
    {
        Cache::put('categories:all',
            Category::with('children')->get(),
            86400
        );
    }

    protected function warmSettings(): void
    {
        Cache::forever('settings:app',
            Setting::all()->pluck('value', 'key')
        );
    }

    protected function warmFeaturedContent(): void
    {
        Cache::put('products:featured',
            Product::featured()->limit(10)->get(),
            3600
        );
    }

    protected function warmNavigation(): void
    {
        Cache::put('navigation:main',
            Category::whereNull('parent_id')->with('children')->get(),
            86400
        );
    }
}
```
Artisan command for preheating cache
Run this during deployment:
```bash
# In your deployment script
php artisan cache:warm
```
Scheduled cache warming strategies
For applications with predictable traffic patterns, schedule cache warming:
```php
// In routes/console.php (Laravel 11+ no longer uses app/Console/Kernel.php)
use Illuminate\Support\Facades\Schedule;

// Warm cache every morning before peak hours
Schedule::command('cache:warm')
    ->dailyAt('06:00')
    ->onOneServer();

// Or warm every hour during business hours
Schedule::command('cache:warm')
    ->hourly()
    ->between('08:00', '18:00')
    ->onOneServer();
```
Deployment pipeline integration
Integrate cache warming into your deployment process:
```bash
#!/bin/bash
# deploy.sh

# Pull latest code
git pull origin main

# Install dependencies
composer install --no-dev --optimize-autoloader

# Run migrations
php artisan migrate --force

# Cache configuration
php artisan config:cache
php artisan route:cache
php artisan view:cache

# Warm application cache
php artisan cache:warm

# Restart queue workers
php artisan queue:restart

echo "Deployment complete!"
```
The php artisan optimize command combines config, route, view, and event caching in a single command:
```bash
php artisan optimize
php artisan cache:warm
```
Troubleshooting common issues
Here are solutions to common MongoDB cache problems:
MongoDB connection problems
Problem: Cache operations fail with connection errors.
Solution: Verify your MongoDB connection settings:
```
# Test connection
php artisan tinker

>>> use Illuminate\Support\Facades\DB;
>>> DB::connection('mongodb')->getDatabaseName();
```
Check that your .env file has correct MongoDB credentials and the MongoDB PHP extension is loaded.
Cache lock storage errors
Problem: Error message: "E11000 duplicate key error collection: cache_locks"
Solution: This occurs because MongoDB uses unique indexes on lock keys to prevent duplicate lock acquisition. This is expected behavior, not an error that needs fixing—it's precisely how MongoDB ensures only one process can hold a lock at a time.
When multiple processes attempt to acquire the same lock simultaneously, MongoDB's unique index constraint prevents duplicates, causing the database to throw a MongoDB\Driver\Exception\BulkWriteException instead of Laravel's expected QueryException. This is the database's way of maintaining lock integrity.
Ensure your mongodb/laravel-mongodb package is version 5.2.0 or higher, which properly handles this exception and converts it into Laravel's expected lock behavior. The package catches the exception and returns false for lock->get(), allowing your application to handle the lock contention gracefully.
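In application code, treat a failed acquisition as ordinary contention rather than an error; a minimal sketch (rebuildReport is a hypothetical expensive operation):

```php
$lock = Cache::lock('report:rebuild', 30);

if (! $lock->get()) {
    // Another worker holds the lock. With laravel-mongodb >= 5.2 the
    // duplicate-key write surfaces here as `false`, not an exception.
    return;
}

try {
    $this->rebuildReport(); // hypothetical expensive operation
} finally {
    $lock->release();
}
```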
TTL index not working
Problem: Expired cache entries aren't being deleted.
Solution: Verify the TTL index was created:
```js
// MongoDB shell
db.cache.getIndexes()
// Should show an index on the expires_at field
```
If missing, run the migration again:
If missing, roll back the last migration batch with php artisan migrate:rollback and run php artisan migrate again. Avoid migrate:fresh, which drops every table in your database.
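You can also create the TTL indexes directly in mongosh; this sketch assumes the driver's default expires_at field:

```js
// expireAfterSeconds: 0 removes each document as soon as the
// date stored in expires_at has passed
db.cache.createIndex({ expires_at: 1 }, { expireAfterSeconds: 0 })
db.cache_locks.createIndex({ expires_at: 1 }, { expireAfterSeconds: 0 })
```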
MongoDB's background thread runs every 60 seconds, so there may be a delay of up to 60 seconds before expired documents are deleted.
Serialization issues
Problem: You get an error when caching Eloquent models or complex objects.
Solution: MongoDB cache serializes data using PHP's serialization. If you get serialization errors:
```php
// Instead of caching the model directly
Cache::put('product', $product, 3600); // ❌

// Cache only the data you need
Cache::put('product', $product->toArray(), 3600); // ✅

// Or use specific attributes
Cache::put('product', $product->only(['id', 'name', 'price']), 3600); // ✅
```
Memory usage concerns
Problem: MongoDB cache collection grows too large.
Solution: Ensure TTL indexes are configured properly. Monitor collection size:
```js
// MongoDB shell
db.cache.stats()
```
For immediate cleanup, manually delete expired entries:
```php
// In a command or controller
use MongoDB\BSON\UTCDateTime;

$collection = Cache::store('mongodb')->getStore()->getCollection();

// expires_at is stored as a BSON date, so compare against a UTCDateTime
$collection->deleteMany(['expires_at' => ['$lt' => new UTCDateTime(now())]]);
```
Multi-environment configuration
Different environments need different caching strategies.
Different cache strategies for dev/staging/prod
Development: Use file or array cache for simplicity:
```php
// config/cache.php
'default' => env('CACHE_STORE', 'file'),
```
.env.development:
CACHE_STORE=file
Staging: Use MongoDB cache to match production:
.env.staging:
```
CACHE_STORE=mongodb
MONGODB_URI=mongodb://staging-server:27017
MONGODB_DATABASE=laravel_staging
```
Production: Use MongoDB with optimized settings:
.env.production:
```
CACHE_STORE=mongodb
MONGODB_URI=mongodb+srv://user:pass@production-cluster.mongodb.net
MONGODB_DATABASE=laravel_production
```
Environment-specific TTL values
Adjust TTL based on environment:
```php
// In a service or repository
$ttl = match (app()->environment()) {
    'local' => 60,        // 1 minute in development
    'staging' => 300,     // 5 minutes in staging
    'production' => 3600, // 1 hour in production
    default => 300,
};

Cache::put('key', $value, $ttl);
```
Or use config values:
```php
// config/cache.php
'ttl' => [
    'short' => env('CACHE_TTL_SHORT', 300),
    'medium' => env('CACHE_TTL_MEDIUM', 1800),
    'long' => env('CACHE_TTL_LONG', 3600),
],
```
Then, in your code:
Cache::put('key', $value, config('cache.ttl.medium'));
Local development with file cache
For local development, file cache is simpler:
.env.local:
CACHE_STORE=file
This avoids needing MongoDB running locally. Switch to MongoDB cache when testing cache-specific features.
Production with MongoDB cache
In production, MongoDB cache benefits from replica sets for high availability:
```
MONGODB_URI=mongodb://replica1:27017,replica2:27017,replica3:27017/?replicaSet=rs0
MONGODB_DATABASE=laravel
CACHE_STORE=mongodb
```
Testing the implementation
Let's verify your MongoDB cache is working correctly.
Create test route
Add a test route in routes/web.php:
```php
use Illuminate\Support\Facades\Cache;

Route::get('/cache-test', function () {
    $startTime = microtime(true);

    // First request will be slow (cache miss)
    $data = Cache::remember('test:data', 300, function () {
        sleep(2); // Simulate slow query

        return [
            'message' => 'This data was cached!',
            'timestamp' => now()->toDateTimeString(),
            'random' => rand(1000, 9999),
        ];
    });

    $endTime = microtime(true);
    $duration = round(($endTime - $startTime) * 1000, 2);

    return response()->json([
        'cached_data' => $data,
        'duration_ms' => $duration,
        'cache_hit' => $duration < 100, // If < 100ms, likely from cache
    ]);
});
```
Verify cached data
Visit /cache-test in your browser twice:
First request (cache miss):
{ "cached_data": { "message": "This data was cached!", "timestamp": "2025-01-10 14:32:15", "random": 7856 }, "duration_ms": 2003.45, "cache_hit": false}
Second request (cache hit):
{ "cached_data": { "message": "This data was cached!", "timestamp": "2025-01-10 14:32:15", "random": 7856 }, "duration_ms": 12.34, "cache_hit": true}
Notice the timestamp and random number are identical on the second request, and response time dropped from ~2 seconds to ~12 milliseconds.
Check MongoDB Atlas dashboard
If you're using MongoDB Atlas:
- Log into MongoDB Atlas.
- Navigate to your cluster.
- Click "Collections."
- Find the cache collection.
- You'll see a document like:
{ "_id": "…", "key": "laravel_cache:test:data", "value": "…", // Serialized data "expires_at": 1704901935}
Check collection size
Monitor how much space your cache uses:
```php
Route::get('/cache-stats', function () {
    $collection = Cache::store('mongodb')->getStore()->getCollection();

    // $collStats returns document counts and storage statistics
    $stats = $collection->aggregate([
        ['$collStats' => ['storageStats' => []]],
    ])->toArray()[0];

    return response()->json([
        'documents' => $stats->storageStats->count,
        'size_mb' => round($stats->storageStats->size / 1024 / 1024, 2),
    ]);
});
```
Measure performance improvements
Add timing to your controllers:
```php
public function index()
{
    $start = microtime(true);

    $products = Cache::remember('products:catalog', 1800, function () {
        return Product::with(['category', 'images'])
            ->where('status', 'active')
            ->limit(100)
            ->get();
    });

    $duration = (microtime(true) - $start) * 1000;

    Log::info('Product catalog loaded', [
        'duration_ms' => round($duration, 2),
        'from_cache' => $duration < 50,
    ]);

    return view('products.index', compact('products'));
}
```
Check your logs to see performance improvements:
```
[2025-01-10 14:35:12] local.INFO: Product catalog loaded {"duration_ms":385.42,"from_cache":false}
[2025-01-10 14:35:18] local.INFO: Product catalog loaded {"duration_ms":8.73,"from_cache":true}
```
Cache hit rate verification
Track cache effectiveness:
```php
// In a middleware or service provider
use Illuminate\Cache\Events\CacheHit;
use Illuminate\Cache\Events\CacheMissed;

Event::listen(CacheHit::class, fn () => Cache::increment('cache_stats:hits'));
Event::listen(CacheMissed::class, fn () => Cache::increment('cache_stats:misses'));

// Check stats
Route::get('/cache-hit-rate', function () {
    $hits = Cache::get('cache_stats:hits', 0);
    $misses = Cache::get('cache_stats:misses', 0);
    $total = $hits + $misses;
    $hitRate = $total > 0 ? ($hits / $total) * 100 : 0;

    return response()->json([
        'hits' => $hits,
        'misses' => $misses,
        'hit_rate' => round($hitRate, 2),
        'status' => $hitRate >= 80 ? 'good' : 'needs_improvement',
    ]);
});
```
Aim for a hit rate above 80% for good cache effectiveness.
Conclusion
MongoDB caching offers a practical solution for Laravel applications already using MongoDB as their primary database. By using the official mongodb/laravel-mongodb package, you get native cache driver support with automatic TTL index management for expired entries.
The key advantage is infrastructure simplicity. Instead of adding Redis or Memcached to your stack, you leverage your existing MongoDB instance. This means fewer services to manage, monitor, and secure while still achieving excellent caching performance.
We covered everything from basic cache operations to advanced topics like distributed locks, cache warming, and production monitoring with Laravel Pulse. The real-world ProductController example showed how to implement caching in your own applications, while the troubleshooting section helps you solve common issues.
MongoDB caching makes sense when you want to reduce infrastructure complexity without sacrificing performance. It's especially suitable for applications where MongoDB is already the primary database, and you need reliable persistent caching with flexible document structures.
Start by caching your most expensive database queries. Monitor your cache hit rates with Laravel Pulse, and adjust TTL values based on how often your data changes. Remember to implement proper cache invalidation using model events, and warm your cache during deployments for the best user experience.
With proper implementation, you can achieve significant performance improvements in response times and substantially reduce database load. These gains come from leveraging a database you're already using, simplifying your infrastructure while boosting application performance.