Migrating from SQL to MongoDB

By Farhan Hasin Chowdhury

Many applications begin their lives built on SQL databases like PostgreSQL or MySQL. For years, they serve their purpose well, until they don't anymore. Maybe the team starts hitting scalability limits, or the rigid schema becomes a bottleneck as the product evolves faster than anticipated. Perhaps the business now deals with semi-structured data that fits awkwardly into normalized tables. Whatever the reason, more and more teams find themselves exploring MongoDB as an alternative or complement to their SQL infrastructure.

MongoDB offers a schema-flexible, document-oriented approach that better fits modern, fast-evolving data models. Unlike SQL databases that enforce structure through tables and rows, MongoDB stores data as JSON-like documents in collections, allowing each record to have its own shape. This flexibility can be liberating, but it also requires a shift in how you think about data modeling, querying, and ensuring consistency.

Migrating from SQL to MongoDB is not about replacing one database with another—it is about choosing the right database for the right use case. SQL databases excel at enforcing relationships and maintaining transactional integrity across normalized tables. MongoDB excels at handling diverse, evolving, and hierarchical data at scale. In many production systems, both coexist, each serving the workloads they handle best.

In this article, we will walk through the entire migration process, from planning and schema redesign to data transformation, query rewriting, and testing. You will learn how to analyze your existing SQL schema, design an equivalent MongoDB structure, migrate your data safely, and adapt your application logic to work with MongoDB's document model. By the end, you will have a clear roadmap for migrating Laravel applications from SQL to MongoDB while preserving data integrity and application reliability.

This article is aimed at developers and architects planning to transition existing SQL-based Laravel or PHP applications to MongoDB, whether partially or fully. You will see practical examples, common pitfalls, and strategies for testing and validating your migration before going live.

Understanding the migration landscape

Migrating from SQL to MongoDB involves several fundamental technical differences that shape how you design and query your data. Understanding these differences up front helps you make informed decisions about what to migrate and how to structure your new schema.

Storage model: Tables vs collections

In SQL, data lives in tables with predefined columns and strict types enforced at the database level. Every row must conform to the same structure. In MongoDB, data lives in collections where each document can have different fields and structures. This flexibility allows your schema to evolve without ALTER TABLE statements or downtime, but it requires discipline and schema validation to maintain consistency.

Relationships: Foreign keys vs embedding and references

SQL manages relationships through foreign keys and joins. When you need related data, you join tables together at query time. MongoDB offers two approaches: embedding related data directly inside a document, or using references to link documents across collections with ObjectIds.

The key decision is based on access patterns. If you always retrieve data together (like an order and its line items), embedding eliminates the need for joins and improves read performance. If data is large, unbounded, or accessed independently (like users and their thousands of orders), references are more appropriate.

Concurrency control: Pessimistic vs optimistic locking

Both MongoDB and SQL databases provide full ACID transactions, but they use different concurrency control models. PostgreSQL and MySQL rely on pessimistic locking for writes: transactions acquire locks on rows or tables and block concurrent access. MongoDB uses optimistic concurrency control, which allows concurrent access and detects conflicts at commit time.

Neither approach is inherently better—they have different trade-offs. Pessimistic locking prevents conflicts but can reduce concurrency. Optimistic locking maximizes concurrency but may require retries when conflicts occur. In MongoDB, good schema design that embeds related data in single documents often eliminates the need for multi-document transactions entirely, since single-document operations are always atomic.

Query language: SQL vs JSON

SQL uses declarative text-based queries with keywords like SELECT, JOIN, and WHERE. MongoDB uses a JSON-based query API and aggregation pipelines, where queries are expressed as structured objects. While Laravel's Eloquent ORM abstracts much of this difference, understanding the underlying execution model is important for performance tuning and debugging.

Deciding what to migrate

Not every part of your application needs to move to MongoDB. The decision should be driven by your data access patterns:

  • Good candidates for MongoDB: Data with varying structures (product catalogs with different attributes), hierarchical or nested data that's always accessed together (orders with line items), semi-structured data (user activity logs, reviews), and data that needs rapid schema evolution

  • Works well in either database: Most CRUD operations, data requiring ACID transactions (both databases support this equally well), and data with complex relationships (both databases can handle this, just with different modeling patterns)

Many teams run both databases side by side, choosing each based on which access patterns best match the database's strengths.

Analyzing and preparing the existing SQL schema

Before you can migrate data, you need to understand what you are migrating. This step involves analyzing your current SQL schema, identifying relationships, constraints, and access patterns. The better you understand your existing structure, the smoother the transition to MongoDB will be.

Start by identifying all the entities in your database. In SQL terms, these are your tables. Look at how they relate to one another. Are they connected through one-to-one, one-to-many, or many-to-many relationships? Understanding these relationships is critical because they will influence how you model data in MongoDB.

If your database has dozens of tables with complex relationships, don't worry—you don't have to migrate everything at once. Start with one or two collections that would benefit most from MongoDB's flexibility. The rest can follow later, or stay in SQL permanently.

For example, consider a simple e-commerce schema with three tables: users, orders, and order_items. The users table stores customer information. The orders table stores orders placed by users, with a foreign key pointing to users. The order_items table stores line items for each order, with foreign keys pointing to both orders and products.

CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(255) UNIQUE NOT NULL
);

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INT REFERENCES users(id),
    total DECIMAL(10,2),
    status VARCHAR(50),
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE order_items (
    id SERIAL PRIMARY KEY,
    order_id INT REFERENCES orders(id),
    product_id INT REFERENCES products(id),
    quantity INT,
    price DECIMAL(10,2)
);

This is a normalized design, which is standard in SQL. Each table serves a specific purpose, and relationships are enforced through foreign keys. When querying, you would use joins to combine data from multiple tables.

The next step is to identify denormalization opportunities. In MongoDB, denormalization is often encouraged because it improves read performance by reducing the need for joins. Ask yourself: which pieces of data are almost always accessed together? These are good candidates for embedding.

In the example above, order_items are almost always retrieved alongside their parent order. Rarely would you query order items independently. This suggests that in MongoDB, order items could be embedded directly within the order document.

Here is how you might map this relationship:

  • users: Separate collection. Users are accessed independently and referenced from other documents.
  • orders: Separate collection. Each order document embeds its line items as an array.
  • order_items: Embedded within orders. No longer a separate collection.

Mark one-to-one and one-to-many relationships carefully. One-to-one relationships (like a user and their profile) are usually good candidates for embedding. One-to-many relationships (like users and orders) can be either embedded or referenced depending on the size and access pattern. If the "many" side is unbounded or grows indefinitely, referencing is better. If it is small and bounded, embedding works well.

Many-to-many relationships are trickier. In SQL, you typically use a join table. In MongoDB, you can use arrays to represent the relationship. For example, if a user can have multiple roles and a role can belong to multiple users, you might store an array of role IDs in the user document.
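
Such a user document might look like this (a hypothetical shape for the role example):

{
  "_id": ObjectId("652f..."),
  "name": "Alice",
  "role_ids": [ObjectId("651a..."), ObjectId("651b...")]
}

A query like db.users.find({ role_ids: someRoleId }) matches against the array directly, and an index on role_ids becomes a multikey index covering every element.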

Exporting your schema is also helpful. You can use database dump tools or ORM introspection to generate a complete picture of your current schema. In Laravel, you can inspect migrations or use Artisan commands to list tables and columns. This documentation will serve as a reference as you design the new MongoDB schema.
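
On recent Laravel versions, the built-in database inspection commands make this quick; for example:

php artisan db:show
php artisan db:table users

The first summarizes the connection and lists its tables; the second prints the columns, indexes, and foreign keys for a single table.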

Finally, create a migration checklist. This should include:

  • All tables and their purposes.
  • Relationships between tables.
  • Constraints like unique indexes, foreign keys, and default values.
  • Access patterns (which queries are most common).
  • Data volume and growth trends.

This checklist ensures that no data or relationships are lost during the migration. It also helps you identify edge cases and potential pitfalls early in the process.

Designing the new MongoDB schema

Schema design in MongoDB is fundamentally different from SQL. In SQL, you design around normalization and relationships. In MongoDB, you design around access patterns—how your application reads and writes data. This shift in mindset is critical to building performant, maintainable MongoDB schemas.

The core principle is this: Model your data based on how you query it, not on how it is structured theoretically. If your application frequently retrieves a user along with their profile, address, and preferences, then embedding that data in a single document makes sense. If orders are queried independently of users, then keeping them in separate collections with references is better.

Let's revisit the e-commerce example and design a MongoDB schema. We identified three main entities: users, orders, and order items. Here is how we might map them:

Users collection

Each user document contains their core profile information. If users have a single address, it can be embedded. If they have multiple addresses, those can be stored as an array of embedded documents.

{
  "_id": ObjectId("652f..."),
  "name": "Alice",
  "email": "alice@example.com",
  "addresses": [
    { "city": "Paris", "country": "France", "type": "home" },
    { "city": "London", "country": "UK", "type": "work" }
  ],
  "created_at": ISODate("2025-01-15T10:00:00Z")
}

Orders collection

Each order document references the user who placed it and embeds the line items. This design allows you to fetch the entire order, including all its items, with a single query.

{
  "_id": ObjectId("652e..."),
  "user_id": ObjectId("652f..."),
  "status": "completed",
  "total": 250.00,
  "items": [
    { "product_id": ObjectId("650a..."), "quantity": 2, "price": 50.00 },
    { "product_id": ObjectId("650b..."), "quantity": 1, "price": 150.00 }
  ],
  "created_at": ISODate("2025-03-10T14:30:00Z")
}

This structure eliminates the need for a separate order_items collection. The items are always retrieved with the order, so embedding them avoids the cost of a join.

Mapping strategies

Here are some general rules for mapping SQL relationships to MongoDB structures:

  • One-to-one: Embed the related data in the same document. For example, a user and their profile can be a single document.
  • One-to-many (bounded): Embed if the "many" side is small and predictable. For example, embed addresses in the user document.
  • One-to-many (unbounded): Use references if the "many" side can grow indefinitely. For example, a user can have thousands of orders, so orders should be a separate collection.
  • Many-to-many: Use arrays of references. For example, a user can have multiple roles, and a role can belong to multiple users. Store an array of role IDs in the user document.

Schema validation

MongoDB supports JSON Schema validation, which allows you to enforce structure and constraints at the database level. While the schema is flexible, validation rules ensure that critical fields are always present and have the correct types.

For example, you can require that every order document has a user_id, status, and total field:

db.createCollection("orders", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["user_id", "status", "total"],
      properties: {
        user_id: { bsonType: "objectId" },
        status: { enum: ["pending", "completed", "canceled"] },
        total: { bsonType: "decimal" }
      }
    }
  }
});

This validation prevents accidental insertion of malformed documents and helps maintain consistency across your application.
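
If the collection already exists, you can attach or tighten the same rules later with the collMod command instead of recreating it. A minimal sketch (with validationLevel "moderate", new writes are validated while existing invalid documents are left alone):

db.runCommand({
  collMod: "orders",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["user_id", "status", "total"]
    }
  },
  validationLevel: "moderate"
})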

Laravel integration

In Laravel, MongoDB models extend from MongoDB\Laravel\Eloquent\Model. You can define embedded relationships with methods like embedsMany(), and referenced relationships with the familiar Eloquent methods such as belongsTo() and hasMany().

For example, an Order model might look like this:

use MongoDB\Laravel\Eloquent\Model;

class Order extends Model
{
    protected $connection = 'mongodb';
    protected $collection = 'orders';
    protected $fillable = ['user_id', 'status', 'total', 'items'];

    public function user()
    {
        return $this->belongsTo(User::class, 'user_id');
    }
}

And a User model:

use MongoDB\Laravel\Eloquent\Model;

class User extends Model
{
    protected $connection = 'mongodb';
    protected $collection = 'users';
    protected $fillable = ['name', 'email', 'addresses'];

    public function orders()
    {
        return $this->hasMany(Order::class, 'user_id');
    }
}

This keeps the Laravel syntax familiar while leveraging MongoDB's document model beneath.

Designing a MongoDB schema requires balancing flexibility with structure. You want the freedom to evolve your data model over time, but you also need enough consistency to avoid chaos. Start with a clear understanding of your access patterns, map your SQL relationships thoughtfully, and use schema validation where necessary to enforce critical constraints.

Migrating data

Once your MongoDB schema is designed, the next step is actually moving the data. This involves exporting data from your SQL database, transforming it to fit the new structure, importing it into MongoDB, and verifying that everything transferred correctly.

The basic workflow looks like this:

  1. Export data from SQL
  2. Transform data to match the MongoDB schema
  3. Import data into MongoDB
  4. Verify accuracy and completeness

Let's walk through each step with practical examples.

Exporting data from SQL

Most SQL databases provide export tools. PostgreSQL has pg_dump and psql, MySQL has mysqldump, and SQLite has .dump. You can export individual tables to CSV, JSON, or SQL dump files.

For example, to export the users table from PostgreSQL to JSON:

psql -d your_database -c "COPY (SELECT row_to_json(users) FROM users) TO STDOUT" > users.json

This outputs each row as a JSON object, one per line. The format is easy to parse and transform.

For relational data like orders and order items, you might need to export them separately and then merge them during transformation.
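
For the running example, the exports might look like this (same COPY pattern as above):

psql -d your_database -c "COPY (SELECT row_to_json(orders) FROM orders) TO STDOUT" > orders.json
psql -d your_database -c "COPY (SELECT row_to_json(order_items) FROM order_items) TO STDOUT" > order_items.json

These two files feed the transformation script in the next section.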

Transforming data

Data transformation is where you reshape SQL rows into MongoDB documents. This often involves:

  • Renaming fields to match MongoDB conventions (for example, id becomes _id).
  • Converting foreign keys into ObjectIds.
  • Embedding related records into arrays.
  • Flattening or nesting structures.

For the orders example, you would:

  1. Export orders and order_items separately.
  2. Group order items by order_id.
  3. Embed the grouped items as an array within each order document.

Here is a simple PHP script to transform orders:

// The psql exports contain one JSON object per line (NDJSON), so parse line by line
$readNdjson = fn (string $path): array => array_map(
    fn (string $line) => json_decode($line, true),
    file($path, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES)
);

$orders = $readNdjson('orders.json');
$items = $readNdjson('order_items.json');

// Derive a deterministic ObjectId from a SQL integer id. The ObjectId
// constructor requires a 24-character hex string, so integer SQL ids
// cannot be passed to it directly.
function objectIdFromInt(int $id): MongoDB\BSON\ObjectId
{
    return new MongoDB\BSON\ObjectId(str_pad(dechex($id), 24, '0', STR_PAD_LEFT));
}

// Group items by order_id
$itemsByOrder = [];
foreach ($items as $item) {
    $itemsByOrder[$item['order_id']][] = [
        'product_id' => objectIdFromInt((int) $item['product_id']),
        'quantity' => $item['quantity'],
        'price' => $item['price']
    ];
}

// Embed items into orders, carrying each SQL id through as a deterministic _id
$transformed = [];
foreach ($orders as $order) {
    $transformed[] = [
        '_id' => objectIdFromInt((int) $order['id']),
        'user_id' => objectIdFromInt((int) $order['user_id']),
        'status' => $order['status'],
        'total' => (float) $order['total'],
        'items' => $itemsByOrder[$order['id']] ?? [],
        'created_at' => new MongoDB\BSON\UTCDateTime(strtotime($order['created_at']) * 1000)
    ];
}

file_put_contents('orders_transformed.json', json_encode($transformed));

This script reads the exported data, groups line items by order, and outputs a transformed JSON file ready for import. Deriving each ObjectId deterministically from the original integer id keeps references consistent across collections and makes verification against the SQL data straightforward. Note that json_encode() serializes the BSON classes into MongoDB Extended JSON (for example, {"$oid": "..."}), which mongoimport understands.

Importing data into MongoDB

MongoDB provides the mongoimport command-line tool for bulk imports. Because the transformation script wrote the documents as a single JSON array, pass the --jsonArray flag:

mongoimport --uri="mongodb+srv://user:pass@cluster.mongodb.net" --db=laravel_mongo --collection=orders --file=orders_transformed.json --jsonArray

This imports all documents from the JSON file into the orders collection.

Alternatively, you can insert the documents programmatically from Laravel. Reusing the $transformed array from the transformation script avoids the JSON round-trip and keeps the BSON ObjectId and date objects intact:

use App\Models\Order;

foreach ($transformed as $order) {
    Order::create($order);
}

Batch inserts are more efficient for large volumes:

DB::connection('mongodb')->collection('orders')->insert($transformed);
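
For very large datasets, a single insert call can exhaust memory. A hedged sketch: insert in chunks to keep memory bounded (the batch size of 1,000 is arbitrary):

foreach (array_chunk($transformed, 1000) as $batch) {
    DB::connection('mongodb')->collection('orders')->insert($batch);
}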

Verifying the migration

After importing, verify that the data transferred correctly. Check:

  • Record counts: Do the totals match between SQL and MongoDB?
  • Sample data: Spot-check random records to ensure fields are correct.
  • Field mappings: Confirm that foreign keys became ObjectIds and embedded arrays contain the right data.

For example, count users in both databases:

# PostgreSQL
psql -d your_database -c "SELECT COUNT(*) FROM users;"
 
# MongoDB
mongosh "mongodb+srv://cluster.mongodb.net/laravel_mongo" --eval "db.users.countDocuments()"

If the counts match, that is a good sign. Next, inspect a few documents:

db.users.findOne({ email: "alice@example.com" })

Compare the output to the corresponding SQL row. Verify that nested structures like addresses are correctly embedded.

Recommended tools

Several tools can simplify the migration process:

  • Studio 3T: A GUI tool with import/export wizards and data transformation features
  • Mongify: A Ruby-based migration tool that can automate SQL-to-MongoDB transformations
  • Custom scripts: For complex transformations, writing your own scripts in PHP, Python, or JavaScript often gives you the most control

Testing on staging first

Never run migrations directly on production. Always test in a staging environment that mirrors production as closely as possible. This lets you catch issues like missing indexes, incorrect data types, or slow queries before they affect real users.

Once you have validated the migration on staging, you can apply the same process to production with confidence. Consider scheduling the migration during low-traffic periods to minimize impact.

Data migration is one of the riskiest parts of moving from SQL to MongoDB, but with careful planning, transformation, and verification, it can be done safely and reliably.

Rewriting queries and application logic

Migrating the database schema and data is only half the battle. The other half is updating your application code to work with MongoDB instead of SQL. This involves rewriting queries, adapting Eloquent relationships, and ensuring that the application logic still behaves correctly.

The first step is identifying all the SQL queries in your application. This includes raw SQL statements, query builder calls, and Eloquent methods. In Laravel, most database interactions happen through Eloquent or the query builder, so the changes are often localized to your models and controllers.

Let's look at how common SQL operations translate to MongoDB.

SELECT queries

In SQL, you retrieve data with SELECT statements:

SELECT * FROM users WHERE status = 'active';

In MongoDB, the equivalent is the find() method:

db.users.find({ status: "active" })

In Laravel, using Eloquent:

// SQL
$users = User::where('status', 'active')->get();
 
// MongoDB (same syntax)
$users = User::where('status', 'active')->get();

Thanks to Laravel's abstraction, the syntax is identical. The difference is in the underlying execution.

JOIN queries

Joins are where things diverge more significantly. In SQL, you combine tables using JOIN:

SELECT users.name, orders.total
FROM users
JOIN orders ON users.id = orders.user_id
WHERE users.status = 'active';

In MongoDB, if data is embedded, there is no need for a join at all. For example, if orders are embedded in the user document, you simply query users:

db.users.find({ status: "active" }, { name: 1, orders: 1 })

If orders are in a separate collection, you use the $lookup stage in the aggregation pipeline:

db.users.aggregate([
  { $match: { status: "active" } },
  {
    $lookup: {
      from: "orders",
      localField: "_id",
      foreignField: "user_id",
      as: "orders"
    }
  },
  { $project: { name: 1, "orders.total": 1 } }
])

In Laravel, you can use Eloquent relationships to achieve similar results:

$users = User::where('status', 'active')->with('orders')->get();

Behind the scenes, Laravel will either return the embedded documents directly or eager-load the referenced documents with a separate query, depending on how you have defined the relationship.

GROUP BY and aggregation

SQL's GROUP BY is powerful for aggregating data:

SELECT user_id, COUNT(*) AS order_count
FROM orders
GROUP BY user_id
ORDER BY order_count DESC;

In MongoDB, you use the aggregation pipeline:

db.orders.aggregate([
  { $group: { _id: "$user_id", order_count: { $sum: 1 } } },
  { $sort: { order_count: -1 } }
])

In Laravel:

$results = DB::connection('mongodb')
    ->collection('orders')
    ->raw(function ($collection) {
        return $collection->aggregate([
            ['$group' => ['_id' => '$user_id', 'order_count' => ['$sum' => 1]]],
            ['$sort' => ['order_count' => -1]]
        ]);
    });

MongoDB's aggregation framework is extremely powerful and can replace many complex SQL queries involving joins, subqueries, and window functions.

UPDATE and DELETE

Updates and deletes work similarly in both systems. In SQL:

UPDATE users SET status = 'inactive' WHERE last_login < '2024-01-01';
DELETE FROM users WHERE status = 'inactive';

In MongoDB:

db.users.updateMany({ last_login: { $lt: new Date("2024-01-01") } }, { $set: { status: "inactive" } })
db.users.deleteMany({ status: "inactive" })

In Laravel:

User::where('last_login', '<', '2024-01-01')->update(['status' => 'inactive']);
User::where('status', 'inactive')->delete();

Again, the Laravel syntax remains consistent across both databases.

Testing output consistency

As you rewrite queries, it is critical to validate that the output matches the original SQL behavior. Write tests that compare results before and after migration. For example:

public function test_active_users_query()
{
    // Insert test data
    User::create(['name' => 'Alice', 'status' => 'active']);
    User::create(['name' => 'Bob', 'status' => 'inactive']);

    // Query
    $users = User::where('status', 'active')->get();

    // Assert
    $this->assertCount(1, $users);
    $this->assertEquals('Alice', $users->first()->name);
}

Run these tests against both the SQL and MongoDB versions of your application to ensure consistency.

Handling edge cases

Be mindful of edge cases like:

  • Null values: MongoDB stores nulls explicitly, while SQL might have default values.
  • Empty arrays: In MongoDB, an empty embedded array is different from a missing field.
  • Type coercion: MongoDB preserves whatever numeric type you store (int32, int64, double, or decimal), while SQL may coerce values between types at query time.
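
The null-versus-missing distinction deserves a concrete example. In MongoDB, filtering on null matches both documents where the field is explicitly null and documents where the field is absent; $exists and $type let you tell them apart:

// Matches docs where phone is null OR the field is missing entirely
db.users.find({ phone: null })

// Matches only docs where the phone field is absent
db.users.find({ phone: { $exists: false } })

// Matches only docs where phone is explicitly set to null
db.users.find({ phone: { $type: "null" } })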

Document these differences and adjust your application logic accordingly.

Rewriting queries is often the most time-consuming part of a migration, but it is also where you gain the most insight into how your application uses data. Take the opportunity to optimize queries, remove unnecessary joins, and simplify complex logic by leveraging MongoDB's document model.

Handling relationships and transactions

One of the biggest differences between SQL and MongoDB is how they handle relationships and transactions. SQL databases rely on normalized tables connected by foreign keys, with ACID transactions that span multiple tables. MongoDB, by contrast, encourages embedding related data and provides multi-document transactions as an option rather than a default.

SQL relationships

In SQL, relationships are explicit and enforced at the database level. Foreign keys ensure referential integrity, meaning you cannot insert an order without a valid user ID. Cascading deletes ensure that when a user is deleted, all their related orders are also removed.

CREATE TABLE orders (
    id SERIAL PRIMARY KEY,
    user_id INT REFERENCES users(id) ON DELETE CASCADE,
    total DECIMAL(10,2)
);

This design makes relationships clear and consistent, but it also requires joins to retrieve related data.

MongoDB relationships

MongoDB replaces foreign keys with two patterns: embedding and referencing.

Embedding stores related data inside the parent document. For example, an order and its line items can be a single document:

{
  "_id": ObjectId("652e..."),
  "user_id": ObjectId("652f..."),
  "status": "completed",
  "items": [
    { "product_id": ObjectId("650a..."), "quantity": 2, "price": 50.00 }
  ]
}

This eliminates the need for joins and makes reads fast. However, it is only practical when the embedded data is bounded and always accessed together.

Referencing links documents across collections using ObjectIds, similar to foreign keys:

{
  "_id": ObjectId("652e..."),
  "user_id": ObjectId("652f..."),
  "product_ids": [ObjectId("650a..."), ObjectId("650b...")]
}

To retrieve the referenced data, you use the $lookup aggregation stage or multiple queries in your application.

In Laravel, you define relationships using Eloquent methods:

// Embedding
class Order extends Model
{
    protected $connection = 'mongodb';

    public function items()
    {
        return $this->embedsMany(OrderItem::class);
    }
}

// Referencing
class Order extends Model
{
    protected $connection = 'mongodb';

    public function user()
    {
        return $this->belongsTo(User::class, 'user_id');
    }
}

Transactions in SQL

In PostgreSQL or MySQL, transactions are implicit and ubiquitous. Every operation is automatically wrapped in a transaction, and you can group multiple statements together:

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;

If any statement fails, the entire transaction rolls back, ensuring consistency.

Transactions in MongoDB

MongoDB introduced multi-document ACID transactions in version 4.0. Before that, only single-document operations were atomic. This means that updating a single document, even if it contains nested arrays, is always all-or-nothing. However, updating multiple documents requires an explicit transaction.

Here is an example of a MongoDB transaction in Laravel:

use MongoDB\Laravel\Connection;

DB::connection('mongodb')->transaction(function (Connection $connection) {
    $connection->collection('accounts')
        ->where('_id', new MongoDB\BSON\ObjectId($fromId))
        ->decrement('balance', 100);

    $connection->collection('accounts')
        ->where('_id', new MongoDB\BSON\ObjectId($toId))
        ->increment('balance', 100);
});

Laravel's MongoDB package handles the session automatically when you use the transaction() method. All operations within the closure are part of the same transaction. If any operation fails, MongoDB rolls back all changes.

When to use transactions

In SQL, transactions are the default. In MongoDB, they should be used sparingly because they add overhead and require coordination across replica set members. Favor single-document atomic operations whenever possible.

For example, instead of using a transaction to update a user and their profile, embed the profile in the user document:

{
  "_id": ObjectId("652f..."),
  "name": "Alice",
  "profile": {
    "bio": "Developer",
    "avatar": "https://..."
  }
}

Now, updating the profile is a single atomic operation with no transaction needed:

User::where('_id', $userId)->update(['profile.bio' => 'Senior Developer']);

Use transactions when:

  • Updating multiple independent documents that must remain consistent.
  • Performing financial operations like transfers or payments.
  • Coordinating changes across collections where embedding is not practical.

Common pitfalls

  • Long-running transactions: MongoDB transactions have a maximum lifetime (60 seconds by default), so keep them short.
  • Overusing transactions: If you find yourself needing transactions frequently, consider redesigning your schema to embed related data.
  • Not testing transaction rollback: Always test that your transactions properly roll back on errors, as in the sketch below.
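
A minimal rollback test might look like this (the Account model and amounts are illustrative, following the transfer example above):

public function test_failed_transfer_rolls_back()
{
    $account = Account::create(['balance' => 100]);

    try {
        DB::connection('mongodb')->transaction(function () use ($account) {
            Account::where('_id', $account->_id)->decrement('balance', 100);

            // Simulate a failure after the first write
            throw new RuntimeException('boom');
        });
    } catch (RuntimeException $e) {
        // Expected: the transaction aborts
    }

    // The decrement must not have been committed
    $this->assertEquals(100, Account::find($account->_id)->balance);
}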

Handling relationships and transactions in MongoDB requires a different mindset than SQL. Embrace embedding where it makes sense, use references for large or independent datasets, and reserve transactions for cases where atomicity across documents is truly necessary.

Testing and validation

Testing is critical during any migration. You need confidence that the data transferred correctly, that queries return the same results, and that application behavior has not changed. This section covers strategies for validating your migration at every level.

Test data accuracy

The first step is verifying that all data made it from SQL to MongoDB without loss or corruption. Start with simple counts:

# PostgreSQL
psql -d your_database -c "SELECT COUNT(*) FROM users;"
 
# MongoDB
mongosh "mongodb+srv://cluster.mongodb.net/laravel_mongo" --eval "db.users.countDocuments()"

If the counts match, that is a good start. Next, spot-check individual records by querying specific users or orders in both databases and comparing the results.

For embedded documents, ensure that nested arrays contain the expected number of elements. For example, if a user had five addresses in SQL, their addresses array in MongoDB should also have five entries.
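
A quick programmatic spot check might look like this (a sketch assuming an addresses table keyed by user_id on the SQL side, matching users by their unique email):

$sqlCount = DB::connection('mysql')
    ->table('addresses')
    ->where('user_id', $sqlUserId)
    ->count();

$mongoUser = User::where('email', $email)->first();

// The embedded array should contain exactly one entry per SQL row
$matches = count($mongoUser->addresses ?? []) === $sqlCount;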

Random sampling

Manually checking every record is impractical for large datasets. Instead, use random sampling to verify a representative subset. For example:

// Fetch 100 random users from SQL
$sqlUsers = DB::connection('mysql')
    ->table('users')
    ->inRandomOrder()
    ->limit(100)
    ->get();

// Fetch the same users from MongoDB, matched on the unique email
// (the SQL integer ids were replaced by ObjectIds during migration)
foreach ($sqlUsers as $sqlUser) {
    $mongoUser = User::where('email', $sqlUser->email)->first();

    // Compare fields
    $this->assertEquals($sqlUser->name, $mongoUser->name);
    $this->assertEquals($sqlUser->email, $mongoUser->email);
}

This approach scales to large datasets and catches edge cases that full scans might miss.

Test application behavior

Beyond raw data, you need to verify that your application behaves the same way after migration. This means running your existing test suite against the MongoDB-backed version of the application.

Laravel's feature tests are perfect for this. They simulate HTTP requests and assert on responses, ensuring that controllers, routes, and views work correctly.

For example, if you have an API endpoint that returns active users:

public function test_active_users_endpoint()
{
    // Seed test data
    User::create(['name' => 'Alice', 'status' => 'active']);
    User::create(['name' => 'Bob', 'status' => 'inactive']);

    // Make request
    $response = $this->getJson('/api/users?status=active');

    // Assert
    $response->assertStatus(200);
    $response->assertJsonCount(1, 'data');
    $response->assertJsonFragment(['name' => 'Alice']);
}

Run this test before and after migration. If it passes in both cases, you have successfully preserved application behavior.

Automate verification

For large migrations, write scripts that automatically compare SQL and MongoDB datasets. These scripts can:

  • Count records in each collection and table.
  • Compare field values for a sample of records.
  • Check for missing or extra documents.
  • Validate that relationships (embedded or referenced) are intact.

Here is a simple Laravel command that verifies user data:

Artisan::command('verify-users', function () {
    $sqlCount = DB::connection('mysql')->table('users')->count();
    $mongoCount = DB::connection('mongodb')->collection('users')->count();

    $this->info("SQL users: $sqlCount");
    $this->info("MongoDB users: $mongoCount");

    if ($sqlCount === $mongoCount) {
        $this->info('Counts match!');
    } else {
        $this->error('Counts do not match!');
    }
});

Run this command as part of your deployment process to catch issues early.

Performance validation

Beyond correctness, you need to ensure that MongoDB performs at least as well as SQL, if not better. Compare query latency and throughput before and after migration.

Use Laravel's query logging to measure execution times:

DB::connection('mongodb')->enableQueryLog();
 
$users = User::where('status', 'active')->get();
 
$queries = DB::connection('mongodb')->getQueryLog();
dump($queries);

Look for slow queries and optimize them with indexes or by restructuring the schema.

Monitoring

Once the migration is live, monitor your MongoDB instance closely. Use tools like MongoDB Compass or Atlas monitoring to track:

  • Query performance.
  • Index usage.
  • Disk and memory consumption.
  • Replication lag.

Set up alerts for anomalies like slow queries, high memory usage, or failed connections. This helps you catch issues before they impact users.

Testing and validation are not one-time tasks. They are ongoing processes that ensure your migration is successful and your application remains reliable.

Optimizing for performance

Migrating to MongoDB opens up new opportunities for performance optimization. MongoDB's flexible schema, powerful indexing, and horizontal scaling capabilities allow you to fine-tune performance in ways that are difficult or impossible in SQL.

Indexing

Indexes are the most important factor in MongoDB performance. Without indexes, queries must scan every document in a collection, which becomes prohibitively slow as data grows.

MongoDB supports several types of indexes:

  • Single-field indexes: Speed up queries on one field
  • Compound indexes: Cover multiple fields, useful for queries with combined filters
  • Text indexes: Enable full-text search on string fields
  • Geospatial indexes: Support location-based queries

Create indexes on fields that you query frequently:

Schema::connection('mongodb')->table('users', function ($collection) {
    $collection->index('email'); // Single-field index
});

Schema::connection('mongodb')->table('orders', function ($collection) {
    $collection->index(['user_id' => 1, 'created_at' => -1]); // Compound index
});

Here, 1 means ascending and -1 means descending.

Use the explain() method to verify that queries are using indexes:

$explain = User::where('email', 'alice@example.com')->explain();

Look for IXSCAN (index scan) in the output. If you see COLLSCAN (collection scan), the query is not using an index and needs optimization.

Caching

For expensive queries, especially aggregation pipelines that run frequently, caching can dramatically reduce database load. Laravel makes this easy with the Cache facade:

$stats = Cache::remember('dashboard:stats', now()->addMinutes(5), function () {
    return DB::connection('mongodb')
        ->collection('orders')
        ->raw(function ($collection) {
            return $collection->aggregate([
                ['$group' => ['_id' => null, 'total' => ['$sum' => '$total']]],
            ]);
        });
});

This caches the result for five minutes, reducing the number of database queries.

Sharding

For very large datasets, MongoDB supports sharding, which distributes data across multiple servers. This allows horizontal scaling, where adding more servers increases capacity and throughput.

Sharding is a complex topic and should only be introduced when your dataset outgrows a single server. For most applications, a well-indexed replica set is sufficient.

If you do need sharding, choose your shard key carefully. A good shard key:

  • Distributes data evenly across shards.
  • Aligns with your query patterns to avoid cross-shard queries.
  • Has high cardinality (many unique values).

For example, user_id might be a good shard key for an orders collection if orders are frequently queried by user.
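
In mongosh, that choice translates into two commands (the database name is illustrative; appending _id to the key guarantees uniqueness within each user's orders):

sh.enableSharding("laravel_mongo")
sh.shardCollection("laravel_mongo.orders", { user_id: 1, _id: 1 })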

Query optimization

Even with indexes, some queries can be slow. Use MongoDB's aggregation pipeline to push as much work as possible to the database rather than processing data in PHP.

For example, instead of fetching all orders and filtering in PHP:

// Slow
$orders = Order::all();
$activeOrders = $orders->filter(fn($order) => $order->status === 'active');

Filter in the database:

// Fast
$activeOrders = Order::where('status', 'active')->get();

For complex transformations, use the aggregation pipeline:

$stats = DB::connection('mongodb')
    ->collection('orders')
    ->raw(function ($collection) {
        return $collection->aggregate([
            ['$match' => ['status' => 'completed']],
            ['$group' => ['_id' => '$user_id', 'total_spent' => ['$sum' => '$total']]],
            ['$sort' => ['total_spent' => -1]],
            ['$limit' => 10]
        ]);
    });

This finds the top 10 spenders without fetching all orders into PHP.

Storage tuning

MongoDB documents have a maximum size of 16 MB. If your documents approach this limit, consider:

  • Splitting large arrays into separate collections.
  • Using references instead of embedding.
  • Storing large binary data (like images) in GridFS or external storage.

Keep documents lean and focused on the data your application actually needs.

Optimizing MongoDB performance is an iterative process. Start with proper indexing, monitor query performance, and adjust your schema and queries based on real-world usage patterns.

Common pitfalls and how to avoid them

Migrating from SQL to MongoDB is complex, and even experienced developers encounter pitfalls. This section highlights the most common mistakes and how to avoid them.

Carrying over relational design directly

The most common mistake is treating MongoDB like a relational database. Developers often create collections that mirror their SQL tables exactly, with references everywhere and no embedding. This defeats the purpose of MongoDB's document model.

Instead, embrace denormalization. Embed related data that is accessed together. For example, instead of separate users and profiles collections, embed the profile in the user document.

Overusing $lookup for simple relationships

The $lookup stage is powerful, but it is also slower than embedding. If you find yourself using $lookup frequently, reconsider your schema. Can the data be embedded instead?

For example, if orders always need customer information, embed a snapshot of the customer data in the order document rather than referencing the full customer record.
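
Such an order document might carry a small customer snapshot alongside the reference (the shape is illustrative):

{
  "_id": ObjectId("652e..."),
  "user_id": ObjectId("652f..."),
  "customer": { "name": "Alice", "email": "alice@example.com" },
  "total": 250.00
}

The trade-off is staleness: if the user later changes their email, old orders keep the value they were placed with, which is often exactly what you want for historical records.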

Ignoring indexes

Without indexes, MongoDB queries are slow. Always create indexes on fields you query frequently, especially:

  • Fields in where() clauses.
  • Fields in orderBy() clauses.
  • Fields used in aggregation pipelines.

Use explain() to verify that indexes are being used.

Forgetting schema validation or field naming consistency

While MongoDB's schema is flexible, inconsistency can lead to bugs. One document might have a field called userId, while another has user_id. Use schema validation to enforce naming conventions and required fields.

Underestimating transaction scope

MongoDB transactions are more expensive than SQL transactions. Overusing them can hurt performance. Favor single-document atomic operations whenever possible.

For example, instead of a transaction to update a user and their profile, embed the profile and update both in one operation.

Neglecting rollback strategy

What happens if the migration fails halfway through? Always have a rollback plan. This might involve:

  • Keeping the SQL database online until the migration is fully validated.
  • Taking snapshots of MongoDB before importing data.
  • Writing scripts to reverse the migration if needed.

Checklist for rollout

Before migrating production data, ensure you have:

  • Backed up the SQL database: Full dump and point-in-time recovery
  • Tested import scripts: Run them multiple times on staging
  • Validated mappings: Confirmed that all fields and relationships are correct
  • Benchmarked performance: Ensure MongoDB queries are at least as fast as SQL
  • Prepared monitoring: Set up alerts for slow queries and errors
  • Documented the process: So the team knows what was changed and why

Following these best practices will help you avoid the most common pitfalls and ensure a smooth migration.

Conclusion

Migrating from SQL to MongoDB is more than just moving data from one database to another. It represents a fundamental shift in how you think about data modeling, querying, and consistency. SQL databases like PostgreSQL excel at enforcing relationships and maintaining transactional integrity across normalized tables. MongoDB excels at handling diverse, evolving, and hierarchical data at scale.

Throughout this article, we have covered the entire migration process, from understanding the landscape and analyzing your SQL schema to designing a new MongoDB structure, transforming and importing data, rewriting queries, and ensuring data integrity through testing and optimization. Each step requires careful planning and validation to ensure a successful transition.

The key takeaway is that MongoDB rewards developers who model data around usage patterns rather than theoretical normalization. By embedding related data where appropriate and using references for large or independent datasets, you can build schemas that are both performant and maintainable. Transactions should be used sparingly, reserved for cases where cross-document consistency is essential.

For Laravel developers, both SQL and MongoDB have their place. PostgreSQL is ideal for transactional workloads that demand strict consistency and relational integrity. MongoDB is ideal for flexible, rapidly evolving data models that need to scale horizontally. Many production systems use both, leveraging the strengths of each to handle different workloads.

As you move forward with your migration, start small and iterate. Migrate one collection or feature at a time, validate thoroughly on staging, and monitor closely in production. This incremental approach reduces risk and gives you confidence that each step is correct before moving to the next.

For further exploration, consult the official MongoDB documentation on data modeling and migrations, as well as the Laravel MongoDB package documentation. MongoDB Atlas also provides migration tools and services that can automate parts of the process for larger datasets.

Remember, migration is not an all-or-nothing decision. You can run SQL and MongoDB side by side, using each for the workloads they handle best. This hybrid approach is common in modern architectures and gives you the flexibility to evolve your data strategy over time.
