
CRUD Operations in Node.js with MongoDB (with Code Examples)


Table of Contents

  1. Introduction
  2. What is CRUD?
  3. Setting Up the Project
  4. Connecting to MongoDB
  5. Create Operation (insertOne, insertMany)
  6. Read Operation (findOne, find)
  7. Update Operation (updateOne, updateMany)
  8. Delete Operation (deleteOne, deleteMany)
  9. Error Handling & Validation
  10. Organizing Code Structure
  11. Best Practices
  12. Conclusion

1. Introduction

MongoDB and Node.js are often paired together in modern web development due to their asynchronous capabilities and JSON-friendly nature. In this module, we’ll explore how to implement CRUD operations (Create, Read, Update, Delete) using the official MongoDB Node.js driver rather than an ODM like Mongoose, which gives you full control and transparency.


2. What is CRUD?

CRUD is an acronym for the four basic types of database operations:

  • Create – Add new data
  • Read – Retrieve data
  • Update – Modify existing data
  • Delete – Remove data

Each of these maps directly to methods provided by the MongoDB Node.js driver.


3. Setting Up the Project

Initialize Project

mkdir mongodb-crud-node
cd mongodb-crud-node
npm init -y
npm install mongodb dotenv

Set Up the .env File

MONGO_URI=mongodb://127.0.0.1:27017
DB_NAME=crud_demo

Folder Structure

.
├── .env
├── db.js
├── crud.js
└── index.js

4. Connecting to MongoDB

db.js

const { MongoClient } = require('mongodb');
require('dotenv').config();

const client = new MongoClient(process.env.MONGO_URI);
let db;

async function connectDB() {
  if (!db) {
    await client.connect();
    db = client.db(process.env.DB_NAME);
    console.log('Connected to MongoDB');
  }
  return db;
}

module.exports = connectDB;

5. Create Operation

insertOne

const db = await connectDB();
const users = db.collection('users');

await users.insertOne({
  name: 'John Doe',
  email: '[email protected]',
  age: 30
});

insertMany

await users.insertMany([
  { name: 'Jane Doe', age: 25 },
  { name: 'Alice', age: 22 }
]);

6. Read Operation

findOne

const user = await users.findOne({ name: 'John Doe' });
console.log(user);

find + toArray

const userList = await users.find({ age: { $gt: 20 } }).toArray();
console.log(userList);

Add filters, projection, and sorting:

const usersSorted = await users
  .find({}, { projection: { name: 1, _id: 0 } })
  .sort({ age: -1 })
  .toArray();

7. Update Operation

updateOne

await users.updateOne(
  { name: 'John Doe' },
  { $set: { age: 31 } }
);

updateMany

await users.updateMany(
  { age: { $lt: 30 } },
  { $inc: { age: 1 } }
);

8. Delete Operation

deleteOne

await users.deleteOne({ name: 'John Doe' });

deleteMany

await users.deleteMany({ age: { $gt: 25 } });

9. Error Handling & Validation

Wrap each database interaction in try-catch to gracefully handle errors:

try {
  const result = await users.insertOne({ name: 'Test' });
  console.log('Inserted document with _id:', result.insertedId);
} catch (err) {
  console.error('MongoDB Error:', err.message);
}

Use validation logic before making DB calls:

if (!email.includes('@')) throw new Error('Invalid email address');

You can also enforce schema validation at the collection level using MongoDB’s built-in $jsonSchema validator (available since MongoDB 3.6).
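
As a minimal sketch of that idea (the collection and field names below are illustrative assumptions, not part of the original example), validation rules can be attached when the collection is created:

// Minimal sketch: collection-level validation with $jsonSchema.
// 'users' and its fields are illustrative; adjust to your own schema.
const db = await connectDB();

await db.createCollection('users', {
  validator: {
    $jsonSchema: {
      bsonType: 'object',
      required: ['name', 'email'],
      properties: {
        name: { bsonType: 'string', description: 'required string' },
        email: { bsonType: 'string', description: 'required string' },
        age: { bsonType: 'number', minimum: 0, description: 'optional non-negative number' }
      }
    }
  }
});

// For a collection that already exists, the same validator can be applied
// with the collMod command instead of createCollection.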


10. Organizing Code Structure

To make your CRUD application production-ready, organize it into a structure like this:

/db
└── connect.js
/controllers
└── userController.js
/routes
└── userRoutes.js
index.js

Move each CRUD function into its own module under /controllers for clean separation of concerns.
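
For example, a hypothetical controllers/userController.js could look like the following sketch (the function names are illustrative, and connect.js is assumed to export the connectDB helper shown earlier):

// controllers/userController.js: a minimal sketch of one CRUD module.
const connectDB = require('../db/connect');

async function createUser(userData) {
  const db = await connectDB();
  return db.collection('users').insertOne(userData);
}

async function getUserByEmail(email) {
  const db = await connectDB();
  return db.collection('users').findOne({ email });
}

module.exports = { createUser, getUserByEmail };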


11. Best Practices

  • Always close the connection in shutdown hooks (see the shutdown sketch after this list).
  • Avoid hardcoded values – use environment variables.
  • Sanitize and validate user input.
  • Use appropriate indexes for frequent queries.
  • Prefer insertMany, updateMany, etc., for batch operations.
  • Keep the logic reusable by abstracting functions into services.
  • Use proper logging (e.g., Winston, Pino) in production.
  • Set a default limit on find() queries to avoid memory overload.
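
For the first point, a minimal shutdown hook might look like this sketch (it assumes the MongoClient instance from db.js is exported alongside connectDB):

// Minimal sketch: close the MongoClient when the process is interrupted.
process.on('SIGINT', async () => {
  await client.close();
  console.log('MongoDB connection closed');
  process.exit(0);
});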

12. Conclusion

Implementing CRUD with the official MongoDB Node.js driver gives you low-level control over your database interactions. While tools like Mongoose abstract a lot, using the native driver teaches you how MongoDB works under the hood — a crucial skill for optimization, performance tuning, and working in microservices environments.

MongoDB with Node.js Using the Official Driver


Table of Contents

  1. Introduction
  2. Why Use the Official MongoDB Driver?
  3. Installing MongoDB Driver for Node.js
  4. Connecting to MongoDB (Local and Atlas)
  5. CRUD Operations with the MongoDB Driver
  6. Using Connection Pooling
  7. Error Handling
  8. Structuring Code for Maintainability
  9. Best Practices
  10. Conclusion

1. Introduction

MongoDB is a flexible, document-based NoSQL database that pairs seamlessly with JavaScript via Node.js. While many developers use ODMs like Mongoose, there are cases—especially in performance-critical apps—where using the official MongoDB Node.js driver directly provides better control and transparency.

In this module, we’ll cover how to interact with MongoDB using the official driver, perform CRUD operations, and apply best practices for real-world projects.


2. Why Use the Official MongoDB Driver?

While ORMs/ODMs provide abstraction, the official driver gives you:

  • Full access to MongoDB’s native features
  • Better performance for low-level database control
  • Fine-tuned configuration and optimization options
  • Lightweight integration without heavy abstractions

It’s a great choice when you need flexibility or are building tools, services, or microservices that prioritize speed and custom queries.


3. Installing MongoDB Driver for Node.js

To get started, initialize a Node.js project:

npm init -y

Then install the official MongoDB driver:

npm install mongodb

4. Connecting to MongoDB (Local and Atlas)

Connecting to Localhost:

const { MongoClient } = require('mongodb');

const uri = 'mongodb://127.0.0.1:27017';
const client = new MongoClient(uri);

async function connectDB() {
  try {
    await client.connect();
    console.log("Connected to MongoDB");
  } catch (err) {
    console.error("Connection failed:", err);
  }
}

connectDB();

Connecting to MongoDB Atlas:

const uri = 'mongodb+srv://<username>:<password>@cluster.mongodb.net/?retryWrites=true&w=majority';

Always secure your credentials using .env files and dotenv.
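
For example, a minimal dotenv setup might look like this sketch (MONGO_URI is an assumed variable name; the placeholder credentials stay in .env, never in code):

// .env (never commit this file)
// MONGO_URI=mongodb+srv://<username>:<password>@cluster.mongodb.net/?retryWrites=true&w=majority

require('dotenv').config();
const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGO_URI);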


5. CRUD Operations with the MongoDB Driver

1. Create (Insert):

const db = client.db('testdb');
const users = db.collection('users');

await users.insertOne({ name: "Alice", age: 25 });

2. Read (Find):

const user = await users.findOne({ name: "Alice" });
console.log(user);

You can also use .find() with .toArray():

const allUsers = await users.find({}).toArray();

3. Update:

await users.updateOne(
  { name: "Alice" },
  { $set: { age: 26 } }
);

4. Delete:

await users.deleteOne({ name: "Alice" });

6. Using Connection Pooling

MongoClient automatically manages a connection pool. You can configure it like this:

const client = new MongoClient(uri, {
  maxPoolSize: 10, // Maximum connections in pool
  minPoolSize: 2,  // Minimum maintained connections
});

This is especially useful for performance tuning in high-traffic apps.


7. Error Handling

Always wrap async operations in try-catch blocks and handle database errors gracefully:

try {
  const result = await users.insertOne({ name: "Bob" });
} catch (err) {
  if (err.code === 11000) {
    console.error("Duplicate key error");
  } else {
    console.error("Unknown error:", err);
  }
}

Use proper logging and monitoring for production environments.


8. Structuring Code for Maintainability

Instead of writing all database logic in a single file, separate your concerns:

/db
└── connect.js
/models
└── userModel.js
/routes
└── userRoutes.js
index.js

Example of connect.js:

const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGO_URI);
let db;

async function initDB() {
  if (!db) {
    await client.connect();
    db = client.db('appdb');
  }
  return db;
}

module.exports = initDB;

This structure scales well in real-world applications.


9. Best Practices

  • Use async/await for cleaner asynchronous code.
  • Secure credentials using dotenv and .env files.
  • Always close connections when the app shuts down.
  • Index frequently queried fields for performance.
  • Avoid storing unnecessary large documents.
  • Use transactions (supported in MongoDB 4.0+) when dealing with multiple operations across collections (see the sketch after this list).
  • Use Schema Validation to enforce document structure.
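
As a minimal sketch of the transactions point (the database and collection names are illustrative, and transactions require a replica set or an Atlas deployment):

// Minimal sketch: two writes committed atomically in one transaction.
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    const db = client.db('appdb');
    await db.collection('orders').insertOne({ item: 'book', qty: 1 }, { session });
    await db.collection('inventory').updateOne(
      { item: 'book' },
      { $inc: { qty: -1 } },
      { session }
    );
  });
} finally {
  await session.endSession();
}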

10. Conclusion

Using the official MongoDB driver gives you more control, better performance, and full access to MongoDB’s capabilities. It’s a great fit for lightweight services, microservices, or when you want to avoid abstraction overhead.

Geospatial Queries and Indexes in MongoDB (2dsphere, $geoNear)


Table of Contents

  1. Introduction
  2. Understanding Geospatial Data
  3. Types of Geospatial Indexes in MongoDB
  4. The 2dsphere Index Explained
  5. Creating a 2dsphere Index
  6. Storing GeoJSON Data
  7. Geospatial Query Operators
  8. Using $near and $geoWithin
  9. The $geoNear Aggregation Stage
  10. Real-World Use Cases
  11. Best Practices for Geospatial Queries
  12. Conclusion

1. Introduction

Modern applications—from ride-hailing platforms to delivery systems and social networking apps—often depend on location-based data. MongoDB offers robust support for geospatial data through its 2dsphere indexes and a powerful suite of geospatial query operators. This module will walk you through how to store, index, and query location data efficiently in MongoDB.


2. Understanding Geospatial Data

Geospatial data represents geographic locations such as points (latitude and longitude), lines, and polygons. In MongoDB, geospatial data is stored using either legacy coordinate pairs or GeoJSON objects.

GeoJSON is the preferred format for use with 2dsphere indexes and is designed to support Earth’s spherical geometry.

Common GeoJSON Types:

  • Point: A single location
  • LineString: A path or road
  • Polygon: An area like a park or a building outline

3. Types of Geospatial Indexes in MongoDB

MongoDB supports two main types of geospatial indexes:

  • 2d indexes: For flat (Euclidean) geometry; used for legacy applications.
  • 2dsphere indexes: For spherical geometry (Earth-like), ideal for real-world mapping applications.

In this module, we’ll focus on 2dsphere, which is the modern and most widely used option.


4. The 2dsphere Index Explained

A 2dsphere index supports queries that calculate geometries on an earth-like sphere, making it suitable for:

  • Calculating distances between points
  • Determining proximity or bounding areas
  • Filtering documents based on spatial relationships

Use Case Examples:

  • “Find restaurants within 3km of a user”
  • “Locate all warehouses within a delivery zone”

5. Creating a 2dsphere Index

First, insert a document with a GeoJSON point:

db.places.insertOne({
  name: "Central Park",
  location: {
    type: "Point",
    coordinates: [-73.9654, 40.7829] // [longitude, latitude]
  }
});

Then create a 2dsphere index on the location field:

db.places.createIndex({ location: "2dsphere" });

6. Storing GeoJSON Data

MongoDB supports various GeoJSON formats:

  • Point: { type: "Point", coordinates: [longitude, latitude] }
  • LineString: { type: "LineString", coordinates: [[lng, lat], [lng, lat], ...] }
  • Polygon: Useful for defining zones and boundaries

Example of a polygon:

{
  type: "Polygon",
  coordinates: [[
    [-73.97, 40.77],
    [-73.98, 40.78],
    [-73.96, 40.78],
    [-73.97, 40.77]
  ]]
}

7. Geospatial Query Operators

MongoDB provides several operators for querying geospatial data:

  • $near: Returns documents ordered by proximity.
  • $geoWithin: Returns documents located inside a specified geometry.
  • $geoIntersects: Returns documents that intersect with a geometry (see the example after this list).
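
For instance, a $geoIntersects query might look like this (the polygon below is an arbitrary illustrative area):

// Find places whose location intersects the given polygon.
db.places.find({
  location: {
    $geoIntersects: {
      $geometry: {
        type: "Polygon",
        coordinates: [[
          [-73.99, 40.75],
          [-73.99, 40.79],
          [-73.94, 40.79],
          [-73.94, 40.75],
          [-73.99, 40.75]
        ]]
      }
    }
  }
});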

8. Using $near and $geoWithin

$near Example:

db.places.find({
  location: {
    $near: {
      $geometry: {
        type: "Point",
        coordinates: [-73.9667, 40.78]
      },
      $maxDistance: 2000 // meters
    }
  }
});

$geoWithin Example:

db.places.find({
  location: {
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [[
          [-73.98, 40.76],
          [-73.97, 40.79],
          [-73.95, 40.78],
          [-73.98, 40.76]
        ]]
      }
    }
  }
});

9. The $geoNear Aggregation Stage

$geoNear is used in aggregation pipelines and requires a 2dsphere index.

db.places.aggregate([
  {
    $geoNear: {
      near: { type: "Point", coordinates: [-73.9667, 40.78] },
      distanceField: "distance",
      spherical: true,
      maxDistance: 3000
    }
  }
]);

Key Options:

  • near: The central point.
  • distanceField: Field to store the computed distance.
  • spherical: Must be true for 2dsphere.
  • maxDistance/minDistance: Filter by distance in meters.

10. Real-World Use Cases

  • Food Delivery App: Find all restaurants within a delivery radius.
  • Cab Aggregator: Match passengers to nearby drivers in real-time.
  • Retail: Suggest stores near the user.
  • Emergency Services: Dispatch the nearest ambulance or fire truck.
  • Event Planning: Recommend venues near a chosen location.

11. Best Practices for Geospatial Queries

  • Always index the geospatial field with a 2dsphere index.
  • Use GeoJSON format for compatibility and full feature support.
  • Combine geospatial queries with regular filters for more efficient searches.
  • Prefer $geoNear inside aggregation for advanced use cases like sorting and filtering.
  • Be cautious with $near on large datasets; combine it with limit() to reduce load (see the sketch after this list).
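
For the last point, a minimal sketch combining $near with limit() (the distance and limit values are arbitrary):

// Return only the 10 closest places within 2 km of the given point.
db.places.find({
  location: {
    $near: {
      $geometry: { type: "Point", coordinates: [-73.9667, 40.78] },
      $maxDistance: 2000
    }
  }
}).limit(10);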

12. Conclusion

MongoDB’s support for geospatial data is powerful and production-ready. By using 2dsphere indexes, GeoJSON formats, and operators like $geoNear and $geoWithin, you can build highly responsive location-based features in your applications.

Text Search and Text Indexes in MongoDB


Table of Contents

  1. Introduction
  2. What is Full-Text Search?
  3. Why Use Text Indexes in MongoDB
  4. Creating Text Indexes
  5. Performing Text Search
  6. Text Index Rules and Limitations
  7. Filtering, Sorting, and Scoring
  8. Multi-language Support in Text Indexes
  9. Text Indexes vs Regex for Search
  10. Real-World Use Cases
  11. Best Practices for Text Search
  12. Conclusion

1. Introduction

Text search is a critical capability for modern applications — whether you’re building a blog, an e-commerce site, or a document management system. MongoDB simplifies full-text search with text indexes, which allow efficient querying of string content stored in your documents.

In this module, you’ll learn everything from how to create and use text indexes, to advanced search techniques, language support, and how MongoDB scores and filters results.


2. What is Full-Text Search?

Full-text search enables querying human-readable text in a more natural way, accounting for linguistic nuances such as stemming, stop words, and relevance scoring. Unlike regular expression matching, full-text search is token-based and optimized for performance and ranking.

MongoDB supports this feature through text indexes on string fields.


3. Why Use Text Indexes in MongoDB

  • Efficient search: Text indexes tokenize and index the document content for fast lookup.
  • Natural language processing: Built-in support for stop words and stemming.
  • Relevance-based ranking: Results can be scored and sorted based on match quality.
  • Multi-language support: Different linguistic rules for over 30 languages.

4. Creating Text Indexes

You can create a text index on a single field or multiple fields:

// Create a text index on a single field
db.articles.createIndex({ content: "text" });

// Create a text index on multiple fields
db.articles.createIndex({ title: "text", content: "text" });

MongoDB only allows one text index per collection, but that index can cover multiple fields.


5. Performing Text Search

Use the $text operator in the find() query to search indexed fields:

db.articles.find({ $text: { $search: "mongodb indexing" } });

This query matches any article where the title or content contains “mongodb” or “indexing”.

Exact Phrases

To search for an exact phrase, enclose it in quotes:

db.articles.find({ $text: { $search: "\"mongodb indexing\"" } });

Excluding Terms

Use - to exclude terms:

db.articles.find({ $text: { $search: "mongodb -replica" } });

6. Text Index Rules and Limitations

  • One text index per collection.
  • Only string fields are supported.
  • Text indexes are case-insensitive and diacritic-insensitive.
  • Fields not explicitly indexed with "text" will be ignored by $text queries.

7. Filtering, Sorting, and Scoring

MongoDB assigns a relevance score to each matching document. You can access it using the textScore metadata:

db.articles.find(
  { $text: { $search: "mongodb performance" } },
  { score: { $meta: "textScore" } }
).sort({ score: { $meta: "textScore" } });

This ensures the results are sorted by how relevant they are to the search terms.

You can also combine $text with other query operators:

db.articles.find({
  $text: { $search: "caching" },
  views: { $gt: 1000 }
});

8. Multi-language Support in Text Indexes

Text indexes support multiple languages by applying different linguistic rules such as stemming and stop words.

When creating the index, specify a default language:

db.articles.createIndex(
  { title: "text", content: "text" },
  { default_language: "french" }
);

You can also override this per-document using the language field:

db.articles.insertOne({
  title: "MongoDB en action",
  content: "Apprenez MongoDB avec des exemples",
  language: "french"
});

9. Text Indexes vs Regex for Search

Feature               | Text Indexes   | Regex
Speed                 | Fast (indexed) | Slow (no index support)
Case-insensitive      | Yes            | Needs /i modifier
Stemming              | Yes            | No
Diacritic sensitivity | No             | Yes
Scoring/Relevance     | Yes            | No

Conclusion: Use text indexes for natural language queries and regex for pattern matching.


10. Real-World Use Cases

  • Blog/Search Engine: Index titles and content for quick searching.
  • E-commerce: Product names and descriptions.
  • Support Portals: Searching FAQ or documentation.
  • Messaging apps: Keyword search in chat histories.
  • CMS: Finding articles by keywords or phrases.

11. Best Practices for Text Search

  • Use compound indexes when combining $text with filters (e.g., category); see the sketch after this list.
  • Avoid over-indexing: Only include relevant fields in the text index.
  • Use $text sparingly on large collections — it’s powerful but can be CPU intensive.
  • Cache frequent searches at the application level.
  • Consider Atlas Search (based on Lucene) for advanced capabilities.
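
As a sketch of the first point (field names are illustrative), a compound index can place an equality-filter field in front of the text fields; queries then need an equality match on that prefix field alongside $text:

// Compound index: equality prefix on category, text index on title and content.
db.articles.createIndex({ category: 1, title: "text", content: "text" });

// The query must include an equality condition on the prefix field.
db.articles.find({
  category: "databases",
  $text: { $search: "indexing performance" }
});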

12. Conclusion

Text indexes in MongoDB provide a powerful, flexible, and efficient way to implement full-text search. With the $text operator, scoring, language support, and stemming, you can create search features that are responsive and user-friendly.

Index Performance and Query Plans in MongoDB (explain())


Table of Contents

  1. Introduction
  2. Why Index Performance Matters
  3. The MongoDB Query Execution Process
  4. Understanding the explain() Method
  5. Output Modes of explain()
  6. Key Metrics in explain() Output
  7. Comparing Query Plans with and without Indexes
  8. Interpreting Common explain() Scenarios
  9. Index Performance Tips and Query Optimization
  10. Tools for Index Performance Monitoring
  11. Conclusion

1. Introduction

When building data-driven applications with MongoDB, indexes play a pivotal role in ensuring performance, especially for large-scale systems. But how do you know if your indexes are being used effectively? That’s where MongoDB’s explain() method comes in.

The explain() method is a powerful diagnostic tool that reveals the query execution plan — including whether an index is used, how efficiently, and what operations were performed during the query execution.


2. Why Index Performance Matters

Poorly designed or unused indexes can lead to:

  • Full collection scans (COLLSCAN)
  • High CPU and memory consumption
  • Increased query response time
  • Slower writes due to index maintenance overhead

By analyzing query plans with explain(), developers and DBAs can ensure optimal performance, particularly for high-traffic applications.


3. The MongoDB Query Execution Process

Before we dive into explain(), let’s understand what happens when you run a query:

  1. Parsing: MongoDB parses the query to understand the fields and values.
  2. Plan Selection: It selects one or more query plans using the Query Planner.
  3. Plan Evaluation: MongoDB tests a few candidate plans (if multiple exist).
  4. Execution: It picks the best one based on efficiency and executes the query.

explain() allows you to see this planning and execution process in action.


4. Understanding the explain() Method

MongoDB’s explain() can be called on any query, update, or delete operation.

db.users.find({ email: "[email protected]" }).explain()

This will return a detailed JSON document describing how the query is executed, including whether an index was used and what type of scan was performed.


5. Output Modes of explain()

There are three verbosity levels you can use:

  • “queryPlanner” (default): Shows index selection and query plan details.
  • “executionStats”: Includes actual run-time stats like documents examined, keys examined, etc.
  • “allPlansExecution”: Shows details for all considered plans, not just the winning one.

Example with mode:

db.users.find({ email: "[email protected]" }).explain("executionStats")

6. Key Metrics in explain() Output

Here are some important fields to monitor in the output:

  • winningPlan: The actual plan used by MongoDB.
  • stage: Whether it’s COLLSCAN, IXSCAN, FETCH, etc.
  • indexName: Shows which index (if any) was used.
  • nReturned: Number of documents returned.
  • keysExamined: Number of index keys scanned.
  • docsExamined: Number of documents scanned (should be low if index is effective).
  • executionTimeMillis: Time taken to execute the query (only in executionStats).

7. Comparing Query Plans With and Without Indexes

Let’s run a query with no index:

db.customers.find({ name: "Alice" }).explain("executionStats")

This might return a plan with:

"stage": "COLLSCAN",
"docsExamined": 100000,
"nReturned": 1

Now, create an index:

db.customers.createIndex({ name: 1 })

Run the same query:

db.customers.find({ name: "Alice" }).explain("executionStats")

You’ll likely see:

"stage": "IXSCAN",
"docsExamined": 1,
"nReturned": 1

This demonstrates the dramatic impact of indexes on performance.


8. Interpreting Common explain() Scenarios

Scenario 1: Collection Scan

"stage": "COLLSCAN"

No index used. Should be optimized with appropriate indexing.

Scenario 2: Index Scan

"stage": "IXSCAN"

The query uses an index — great for performance.

Scenario 3: Covered Query

If projection is used and only index fields are returned:

"stage": "PROJECTION_COVERED"

Covered queries skip document fetch, making them extremely fast.
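
A minimal sketch of a covered query (field names are illustrative): the index contains every field the query filters on and returns, so no documents need to be fetched.

// Index on email; the query filters on email and projects only email.
db.users.createIndex({ email: 1 });

db.users.find(
  { email: "[email protected]" },
  { email: 1, _id: 0 }
).explain("executionStats");
// The winning plan should show an IXSCAN with a covered projection and no FETCH stage.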


9. Index Performance Tips and Query Optimization

  • Always profile slow queries with explain("executionStats").
  • Minimize docsExamined and keysExamined — they indicate work done.
  • Use covered queries where possible.
  • Avoid full scans on large collections unless intentional (e.g., reports).
  • Compound Indexes should match field order used in queries.
  • Use hint() to force index usage if the optimizer doesn’t pick the right one.

Example:

db.customers.find({ name: "Alice" }).hint({ name: 1 }).explain()

10. Tools for Index Performance Monitoring

MongoDB Atlas Performance Advisor

Automatically suggests indexes based on real workload.

mongotop and mongostat

CLI tools that show read/write activity and bottlenecks.

Application Logs

Enable query profiling to log slow operations.

db.setProfilingLevel(1, { slowms: 50 }) // Log queries > 50ms

11. Conclusion

MongoDB’s explain() is an indispensable tool for anyone serious about optimizing performance. By using it consistently during development and in production monitoring, you can:

  • Ensure indexes are being used effectively
  • Minimize expensive collection scans
  • Guide decisions for index creation and query restructuring