
Building Mini-Applications Using Databases: A Complete Guide for Python Developers


Table of Contents

  • Introduction
  • Why Build Mini-Applications?
  • Setting Up the Environment
  • Project 1: Simple To-Do List App (SQLite)
    • Project Structure
    • Database Schema
    • CRUD Operations
    • Sample Code
  • Project 2: Contact Manager App (PostgreSQL)
    • Project Structure
    • Database Schema
    • Key Functionalities
    • Sample Code
  • Project 3: Notes API (Flask + MongoDB)
    • Project Structure
    • API Endpoints
    • MongoDB Collections
    • Sample Code
  • Best Practices for Database-Driven Applications
  • Conclusion

Introduction

Building mini-applications using databases is an excellent way to reinforce your knowledge of Python, SQL, and database management systems.
These projects simulate real-world application development, giving you practical experience with designing schemas, handling connections, performing CRUD operations, and structuring maintainable codebases.

In this module, you will learn how to build database-driven mini-projects step-by-step, covering everything from schema design to application logic implementation.


Why Build Mini-Applications?

Here are several reasons why working on mini-projects is crucial:

  • Solidifies Core Concepts: Reinforces learning through application.
  • Portfolio Building: Mini-projects make excellent portfolio pieces for job applications.
  • Real-World Simulation: Get hands-on experience with scenarios like user management, data validation, transactions, and error handling.
  • Database Best Practices: Understand normalization, indexing, and query optimization.

Setting Up the Environment

Make sure you have the following installed:

  • Python 3.10+
  • pip (Python package installer)
  • SQLite3 (built-in with Python)
  • PostgreSQL (and psycopg2 package)
  • MongoDB (local or cloud instance like Atlas)
  • Flask (for building simple APIs)

Install necessary packages:

pip install sqlalchemy psycopg2-binary pymongo flask

Project 1: Simple To-Do List App (SQLite)

Project Structure

todo_app/
├── database.py
├── models.py
├── app.py
└── requirements.txt

Database Schema

We will have a single table called tasks:

Column      | Type    | Attributes
id          | INTEGER | PRIMARY KEY, AUTOINCREMENT
description | TEXT    | NOT NULL
completed   | BOOLEAN | DEFAULT False

CRUD Operations

We will perform:

  • Create: Add a new task
  • Read: View all tasks
  • Update: Mark task as complete
  • Delete: Remove a task

Sample Code

database.py

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

engine = create_engine('sqlite:///tasks.db')
Session = sessionmaker(bind=engine)
session = Session()

models.py

from sqlalchemy import Column, Integer, String, Boolean
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Task(Base):
    __tablename__ = 'tasks'

    id = Column(Integer, primary_key=True)
    description = Column(String, nullable=False)
    completed = Column(Boolean, default=False)

app.py

from database import session, engine
from models import Base, Task

# Create the tasks table if it does not already exist
Base.metadata.create_all(engine)

def add_task(description):
    task = Task(description=description)
    session.add(task)
    session.commit()

def list_tasks():
    tasks = session.query(Task).all()
    for task in tasks:
        print(f"{task.id}: {task.description} [{'Done' if task.completed else 'Pending'}]")

def mark_done(task_id):
    task = session.query(Task).filter_by(id=task_id).first()
    if task:
        task.completed = True
        session.commit()

def delete_task(task_id):
    task = session.query(Task).filter_by(id=task_id).first()
    if task:
        session.delete(task)
        session.commit()
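
A quick way to exercise these functions is a small command-line check at the bottom of app.py. The calls below are purely illustrative and assume the SQLite file is writable:

if __name__ == "__main__":
    # Illustrative run: add tasks, list them, complete one, then remove one
    add_task("Design the database schema")
    add_task("Implement the CRUD functions")
    list_tasks()

    mark_done(1)    # assumes a task with id 1 exists
    delete_task(2)  # assumes a task with id 2 exists
    list_tasks()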

Project 2: Contact Manager App (PostgreSQL)

Project Structure

contact_manager/
├── database.py
├── models.py
├── app.py
└── requirements.txt

Database Schema

Column | Type   | Attributes
id     | SERIAL | PRIMARY KEY
name   | TEXT   | NOT NULL
phone  | TEXT   | UNIQUE
email  | TEXT   | UNIQUE

Key Functionalities

  • Add a new contact
  • Search contacts by name
  • Update contact information
  • Delete a contact

Sample Code

database.py

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

DATABASE_URL = 'postgresql+psycopg2://username:password@localhost/contactdb'
engine = create_engine(DATABASE_URL)
Session = sessionmaker(bind=engine)
session = Session()

models.py

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Contact(Base):
    __tablename__ = 'contacts'

    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    phone = Column(String, unique=True)
    email = Column(String, unique=True)

app.py

from database import session, engine
from models import Base, Contact

# Create the contacts table if it does not already exist
Base.metadata.create_all(engine)

def add_contact(name, phone, email):
    contact = Contact(name=name, phone=phone, email=email)
    session.add(contact)
    session.commit()

def find_contact(name):
    return session.query(Contact).filter(Contact.name.ilike(f"%{name}%")).all()

def update_contact(contact_id, phone=None, email=None):
    contact = session.query(Contact).filter_by(id=contact_id).first()
    if not contact:
        return
    if phone:
        contact.phone = phone
    if email:
        contact.email = email
    session.commit()

def delete_contact(contact_id):
    contact = session.query(Contact).filter_by(id=contact_id).first()
    if contact:
        session.delete(contact)
        session.commit()

Project 3: Notes API (Flask + MongoDB)

Project Structure

notes_api/
├── app.py
└── requirements.txt

API Endpoints

  • POST /notes: Create a new note
  • GET /notes: List all notes
  • GET /notes/<id>: Get a specific note
  • PUT /notes/<id>: Update a note
  • DELETE /notes/<id>: Delete a note

MongoDB Collections

Collection: notes

Each document:

{
    "_id": "ObjectId",
    "title": "Meeting Notes",
    "content": "Discuss project roadmap",
    "timestamp": "ISO8601 DateTime"
}

Sample Code

app.py

from flask import Flask, request, jsonify
from pymongo import MongoClient
from bson.objectid import ObjectId
from datetime import datetime

app = Flask(__name__)
client = MongoClient('mongodb://localhost:27017/')
db = client['notesdb']
notes_collection = db['notes']

@app.route('/notes', methods=['POST'])
def create_note():
    data = request.json
    data['timestamp'] = datetime.utcnow()
    result = notes_collection.insert_one(data)
    return jsonify({"id": str(result.inserted_id)}), 201

@app.route('/notes', methods=['GET'])
def get_notes():
    notes = list(notes_collection.find())
    for note in notes:
        note['_id'] = str(note['_id'])
    return jsonify(notes)

@app.route('/notes/<id>', methods=['GET'])
def get_note(id):
    note = notes_collection.find_one({'_id': ObjectId(id)})
    if note:
        note['_id'] = str(note['_id'])
        return jsonify(note)
    return jsonify({'error': 'Note not found'}), 404

@app.route('/notes/<id>', methods=['PUT'])
def update_note(id):
    data = request.json
    result = notes_collection.update_one({'_id': ObjectId(id)}, {'$set': data})
    if result.matched_count:
        return jsonify({'success': True})
    return jsonify({'error': 'Note not found'}), 404

@app.route('/notes/<id>', methods=['DELETE'])
def delete_note(id):
    result = notes_collection.delete_one({'_id': ObjectId(id)})
    if result.deleted_count:
        return jsonify({'success': True})
    return jsonify({'error': 'Note not found'}), 404

if __name__ == '__main__':
    app.run(debug=True)

Best Practices for Database-Driven Applications

  • Always validate and sanitize user input.
  • Implement proper error handling for database failures.
  • Close database connections properly.
  • Use environment variables for sensitive data (like database credentials); see the sketch after this list.
  • Create indexes on frequently queried fields to improve performance.
  • Use migration tools like Alembic for schema changes.
  • Always back up databases in production environments.
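
For instance, the advice on environment variables can be applied by reading the connection string from the environment instead of hard-coding credentials. The DATABASE_URL variable name below is only a convention used for this sketch:

import os

from sqlalchemy import create_engine

# Read the connection string from the environment; fall back to local SQLite for development.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///tasks.db")
engine = create_engine(DATABASE_URL)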

Conclusion

Building mini-applications using databases is one of the most effective ways to master backend development concepts in Python.
By creating To-Do lists, contact managers, and simple APIs, you not only strengthen your Python skills but also become comfortable with real-world database interactions.

Today in History – 27 April


4977 B.C.

On this very day in 4977 B.C., according to German mathematician and astronomer Johannes Kepler, who is considered a founder of modern science, the universe was created. Kepler is best known for his theories explaining the motion of planets.

1773

The British Parliament passed the Tea Act, a bill designed to save the faltering East India Company from bankruptcy by greatly lowering the tea tax it paid to the British government and, thus, granting it a de facto monopoly on the American tea trade.

1526

Babur Shah was declared Emperor of Delhi.

1606

Jahangir arrested Khusroo.

1705

Aurangzeb captured the Fort of Vagingera.

1748

Muhammad Shah, Mughal emperor, passed away and was succeeded by his son Ahmad Shah.

1854

First telegram was sent between Mumbai and Pune.

1857

Jamshedji Framji Madan (J. F. Madan), founder of India’s first Cinema Hall – ‘Elphinstone Palace’ – at Calcutta and of the Parsi Theatrical Company, was born.

1878

Calcutta University started women’s education and allowed women to appear in the Entrance Examination.

1906

China, as suzerain of Tibet, agreed to the terms of a treaty proposed by Britain. The treaty said that foreign powers might not send representatives to Tibet, receive transportation or mining concessions, or occupy, buy or lease any territory in Tibet without British permission. The British wanted to prevent the Russians from establishing a protectorate over Tibet, which lay on India’s northern border. Lord Curzon, Viceroy of India, failed in attempts to communicate with Tibet’s ruler, the Dalai Lama. A British military expedition fought its way to the capital at Lhasa and concluded the agreement.

1918

Gandhiji attended the Viceroy’s War Conference at Delhi and addressed it in Hindustani; he subsequently toured Kaira district to raise recruits for the army.

1960

Presidential orders were issued on the report of the Committee of Parliament on Official Languages, covering issues relating to the preparation of terminology (Hindi glossaries, Hindi translations of codes and procedural literature, and training employees in Hindi for the propagation of Hindi).

1993

A national credit fund for women called the ‘Rashtriya Mahila Kosh’ was set up.

1994

The Supreme Court struck down Section 309 of the Indian Penal Code (which makes suicide a punishable offence) as violating Article 21.

1996

The 11th General Election of India began.


Using ORM with SQLAlchemy: Complete Beginner to Advanced Guide


Table of Contents

  • Introduction
  • What is an ORM and Why Use It?
  • Introduction to SQLAlchemy
  • Installing SQLAlchemy
  • Setting Up a Database with SQLAlchemy
  • Core Concepts of SQLAlchemy ORM
    • Declarative Base
    • Mapping Classes to Tables
  • Creating Tables and Models
  • Performing CRUD Operations Using ORM
    • Inserting Records
    • Querying Records
    • Updating Records
    • Deleting Records
  • Relationships: One-to-Many, Many-to-One, Many-to-Many
  • Advanced Query Techniques
  • SQLAlchemy Sessions Explained
  • Best Practices for Using SQLAlchemy ORM
  • Conclusion

Introduction

Managing relational databases in Python traditionally involved writing raw SQL queries.
While effective, it often led to tedious, error-prone, and hard-to-maintain code.

An Object Relational Mapper (ORM) bridges the gap by allowing developers to interact with databases using Python classes and objects instead of raw SQL statements.
SQLAlchemy is one of the most powerful and flexible ORM libraries available for Python.

In this guide, we will cover everything you need to know about using SQLAlchemy’s ORM features, from basics to advanced techniques.


What is an ORM and Why Use It?

An ORM (Object Relational Mapper) allows you to:

  • Represent database tables as Python classes.
  • Represent rows as instances of classes.
  • Query and manipulate database records using Python syntax.
  • Abstract away raw SQL for most common operations.

Advantages of Using an ORM:

  • Productivity: Write less code and avoid repetitive SQL.
  • Maintainability: Code is easier to read and modify.
  • Security: Reduces the risk of SQL injection attacks.
  • Portability: Easily switch databases without major rewrites.

Introduction to SQLAlchemy

SQLAlchemy is a comprehensive Python library for database interaction.
It offers two primary layers:

  • Core: Direct SQL expression language.
  • ORM: Higher-level abstraction over Core to map tables to Python classes.

In this module, our focus is on SQLAlchemy ORM.
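
Before moving on, here is a rough sketch of the difference between the two layers, assuming SQLAlchemy 1.4 or newer and an in-memory SQLite database. The users table and Customer class are illustrative only:

from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine, insert, select
from sqlalchemy.orm import Session, declarative_base

engine = create_engine('sqlite:///:memory:')

# --- Core: tables and SQL expressions ---
metadata = MetaData()
users_table = Table(
    'users', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String)
)
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(insert(users_table).values(name="Alice"))
with engine.connect() as conn:
    print(conn.execute(select(users_table)).all())

# --- ORM: mapped classes and Session objects ---
Base = declarative_base()

class Customer(Base):
    __tablename__ = 'customers'
    id = Column(Integer, primary_key=True)
    name = Column(String)

Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(name="Alice"))
    session.commit()
    print(session.query(Customer).filter_by(name="Alice").first().name)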


Installing SQLAlchemy

You can install SQLAlchemy using pip:

pip install SQLAlchemy

To install optional drivers (like for PostgreSQL, MySQL):

pip install psycopg2-binary  # for PostgreSQL
pip install pymysql # for MySQL

Setting Up a Database with SQLAlchemy

Start by creating an engine, which is the starting point for any database interaction:

from sqlalchemy import create_engine

# Create an engine for an SQLite database
engine = create_engine('sqlite:///mydatabase.db', echo=True)

  • 'sqlite:///mydatabase.db' means using a SQLite database file.
  • echo=True enables SQL query logging for debugging.

For other databases, connection strings differ, for example:

# PostgreSQL
engine = create_engine('postgresql+psycopg2://username:password@localhost/mydatabase')

# MySQL
engine = create_engine('mysql+pymysql://username:password@localhost/mydatabase')

Core Concepts of SQLAlchemy ORM

Declarative Base

The declarative base class is used to declare models that map to database tables.

from sqlalchemy.orm import declarative_base

Base = declarative_base()

Mapping Classes to Tables

Each class represents a database table.
Each attribute represents a column.

Example:

from sqlalchemy import Column, Integer, String

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    email = Column(String, unique=True)

Creating Tables and Models

After defining your models, create the tables:

Base.metadata.create_all(engine)

This will create all tables based on the classes inheriting from Base.


Performing CRUD Operations Using ORM

Before performing any operations, you need to create a Session:

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)
session = Session()

Inserting Records

new_user = User(name="Alice", email="[email protected]")
session.add(new_user)
session.commit()

Adding multiple records:

users = [
    User(name="Bob", email="[email protected]"),
    User(name="Charlie", email="[email protected]")
]
session.add_all(users)
session.commit()

Querying Records

Query all users:

users = session.query(User).all()
for user in users:
    print(user.name, user.email)

Query with filters:

user = session.query(User).filter_by(name="Alice").first()
print(user.email)

Advanced filter:

from sqlalchemy import or_

users = session.query(User).filter(or_(User.name == 'Alice', User.name == 'Bob')).all()

Updating Records

user = session.query(User).filter_by(name="Alice").first()
user.email = "[email protected]"
session.commit()

Deleting Records

user = session.query(User).filter_by(name="Charlie").first()
session.delete(user)
session.commit()

Relationships: One-to-Many, Many-to-One, Many-to-Many

SQLAlchemy handles relationships elegantly using relationship() and ForeignKey.

One-to-Many Example

from sqlalchemy import ForeignKey
from sqlalchemy.orm import relationship

class Department(Base):
    __tablename__ = 'departments'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    employees = relationship("Employee", back_populates="department")

class Employee(Base):
    __tablename__ = 'employees'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    department_id = Column(Integer, ForeignKey('departments.id'))
    department = relationship("Department", back_populates="employees")

This allows easy navigation:

# Access employees of a department
dept = session.query(Department).first()
for emp in dept.employees:
    print(emp.name)

# Access department of an employee
emp = session.query(Employee).first()
print(emp.department.name)
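
The heading above also mentions many-to-many relationships, which the examples so far do not cover. A minimal sketch uses an association table together with the secondary argument; the Student/Course names are illustrative, and Base is the declarative base defined earlier:

from sqlalchemy import Table, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship

# Association table linking students and courses (no mapped class required)
student_course = Table(
    'student_course', Base.metadata,
    Column('student_id', ForeignKey('students.id'), primary_key=True),
    Column('course_id', ForeignKey('courses.id'), primary_key=True)
)

class Student(Base):
    __tablename__ = 'students'

    id = Column(Integer, primary_key=True)
    name = Column(String)
    courses = relationship("Course", secondary=student_course, back_populates="students")

class Course(Base):
    __tablename__ = 'courses'

    id = Column(Integer, primary_key=True)
    title = Column(String)
    students = relationship("Student", secondary=student_course, back_populates="courses")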

Advanced Query Techniques

  • Sorting:

from sqlalchemy import desc

users = session.query(User).order_by(desc(User.name)).all()

  • Pagination:

users = session.query(User).limit(10).offset(20).all()

  • Joining Tables:

session.query(Employee).join(Department).filter(Department.name == "IT").all()

SQLAlchemy Sessions Explained

  • A Session is a workspace for your objects.
  • It tracks changes and manages transactions.
  • You must always commit() changes or rollback() if an error occurs.

Typical pattern:

session = Session()
try:
    # Perform operations
    session.commit()
except:
    session.rollback()
    raise
finally:
    session.close()

Best Practices for Using SQLAlchemy ORM

  • Always close sessions properly to avoid connection leaks.
  • Use session context managers or scoped sessions for multithreading (see the sketch after this list).
  • Use indexes on frequently queried columns.
  • Use lazy loading cautiously with relationships to optimize performance.
  • Validate data before committing to avoid database integrity errors.
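
The first two bullets can be illustrated with the context-manager support built into SQLAlchemy 1.4+; engine and User here are the ones defined earlier in this guide, and the email value is purely illustrative:

from sqlalchemy.orm import sessionmaker

Session = sessionmaker(bind=engine)

# The session is closed automatically when the outer block exits;
# session.begin() commits on success and rolls back if an exception is raised.
with Session() as session:
    with session.begin():
        session.add(User(name="Dave", email="dave@example.com"))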

Conclusion

SQLAlchemy ORM offers a powerful and elegant way to interact with relational databases through Python classes and objects.
You have now learned:

  • How to define models
  • How to create tables
  • How to perform CRUD operations
  • How to model relationships
  • How to execute advanced queries

MongoDB Integration with PyMongo: Complete Beginner to Advanced Guide


Table of Contents

  • Introduction
  • Why MongoDB for Python Applications?
  • What is PyMongo?
  • Installing PyMongo
  • Connecting to a MongoDB Database
  • Creating Databases and Collections
  • CRUD Operations in MongoDB using PyMongo
    • Inserting Documents
    • Querying Documents
    • Updating Documents
    • Deleting Documents
  • Advanced Queries and Filtering
  • Indexing in MongoDB with PyMongo
  • Aggregation Pipelines
  • Best Practices for Working with PyMongo
  • Conclusion

Introduction

As modern applications evolve, handling unstructured or semi-structured data efficiently becomes a critical need.
MongoDB, a popular NoSQL database, offers developers the flexibility to work with dynamic schemas and massive datasets with ease.

In this module, you will learn how to integrate MongoDB with Python applications using PyMongo, a powerful and official Python driver for MongoDB.
We will cover everything from installation to complex querying and aggregation.


Why MongoDB for Python Applications?

  • Schema-less Structure: Store data in JSON-like documents without a strict schema.
  • High Performance: Designed for scalability and high throughput.
  • Ease of Use: Insert, update, and retrieve data with simple commands.
  • Scalability: Supports horizontal scaling with sharding.
  • Flexible Data Models: Perfect for rapidly changing application requirements.

MongoDB is particularly useful when dealing with big data, real-time analytics, IoT applications, and flexible content management systems.


What is PyMongo?

PyMongo is the official Python driver for MongoDB.
It provides an intuitive and powerful way to interact with MongoDB databases, collections, and documents directly from Python scripts and applications.

With PyMongo, you can:

  • Perform CRUD operations
  • Execute complex queries and aggregations
  • Create indexes
  • Manage connections
  • Handle transactions (with MongoDB 4.0+)
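
The last bullet deserves a short illustration. Multi-document transactions require MongoDB 4.0+ running as a replica set (or an Atlas cluster); the database, collection, and field names below are placeholders:

from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017/')
accounts = client['bank']['accounts']  # placeholder database and collection

with client.start_session() as session:
    with session.start_transaction():
        # Both updates commit together, or neither does if an exception is raised
        accounts.update_one({"name": "Alice"}, {"$inc": {"balance": -100}}, session=session)
        accounts.update_one({"name": "Bob"}, {"$inc": {"balance": 100}}, session=session)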

Installing PyMongo

You can install PyMongo using pip:

pip install pymongo

To verify installation:

python -c "import pymongo; print(pymongo.__version__)"

Connecting to a MongoDB Database

You can connect to a local or remote MongoDB server.
Here is how you create a connection:

from pymongo import MongoClient

# Connect to MongoDB server
client = MongoClient('mongodb://localhost:27017/')

# Access a specific database
db = client['mydatabase']

# Access a collection
collection = db['users']

Notes:

  • Default MongoDB server runs on localhost:27017.
  • If you are connecting to a cloud database like MongoDB Atlas, replace the connection string accordingly.
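
As a rough sketch of the second note, an Atlas connection string uses the mongodb+srv:// scheme and is best supplied through an environment variable rather than hard-coded; MONGODB_URI below is just an assumed variable name:

import os

from pymongo import MongoClient

# Expects something like mongodb+srv://user:password@cluster-host/?retryWrites=true&w=majority
uri = os.environ["MONGODB_URI"]
client = MongoClient(uri)
db = client['mydatabase']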

Creating Databases and Collections

MongoDB is flexible. You do not have to explicitly create a database or a collection.
They are created automatically when you insert the first document.

# Accessing a database and collection
db = client['company']
employees = db['employees']

CRUD Operations in MongoDB using PyMongo

Let us dive into the core operations of any database system.

Inserting Documents

Insert a single document:

employee = {
    "name": "Alice",
    "department": "HR",
    "salary": 50000
}
employees.insert_one(employee)

Insert multiple documents:

employee_list = [
    {"name": "Bob", "department": "IT", "salary": 70000},
    {"name": "Charlie", "department": "Finance", "salary": 60000}
]
employees.insert_many(employee_list)

Querying Documents

Find a single document:

result = employees.find_one({"name": "Alice"})
print(result)

Find multiple documents:

for emp in employees.find({"department": "IT"}):
    print(emp)

Advanced query with conditions:

for emp in employees.find({"salary": {"$gt": 60000}}):
    print(emp)

Updating Documents

Update a single document:

employees.update_one(
    {"name": "Alice"},
    {"$set": {"salary": 55000}}
)

Update multiple documents:

employees.update_many(
    {"department": "Finance"},
    {"$inc": {"salary": 5000}}
)

  • $set modifies specific fields.
  • $inc increments numeric fields.

Deleting Documents

Delete a single document:

employees.delete_one({"name": "Charlie"})

Delete multiple documents:

employees.delete_many({"department": "HR"})

Advanced Queries and Filtering

You can use logical operators like $and, $or, $not, $nor for complex filtering:

for emp in employees.find({
    "$or": [
        {"department": "IT"},
        {"salary": {"$gt": 60000}}
    ]
}):
    print(emp)

You can also project specific fields:

for emp in employees.find({}, {"name": 1, "salary": 1, "_id": 0}):
    print(emp)

Indexing in MongoDB with PyMongo

Indexes improve query performance.

Create an index:

employees.create_index([("name", 1)])  # 1 for ascending order

Create a unique index:

employees.create_index([("email", 1)], unique=True)

List indexes:

print(employees.index_information())

Aggregation Pipelines

Aggregation provides powerful ways to transform and analyze data.

Simple aggregation example:

pipeline = [
    {"$match": {"department": "IT"}},
    {"$group": {"_id": "$department", "averageSalary": {"$avg": "$salary"}}}
]

for result in employees.aggregate(pipeline):
    print(result)

Stages like $match, $group, $sort, $project, $limit, and $skip can be combined for complex analytics.
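
As an illustration of combining stages, the pipeline below (reusing the employees collection from the earlier examples) filters, groups, sorts, reshapes, and limits in a single pass:

pipeline = [
    {"$match": {"salary": {"$gte": 50000}}},
    {"$group": {"_id": "$department",
                "averageSalary": {"$avg": "$salary"},
                "count": {"$sum": 1}}},
    {"$sort": {"averageSalary": -1}},
    {"$project": {"department": "$_id", "averageSalary": 1, "count": 1, "_id": 0}},
    {"$limit": 3}
]

for result in employees.aggregate(pipeline):
    print(result)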


Best Practices for Working with PyMongo

  • Connection Pooling: PyMongo manages connection pooling automatically.
  • Error Handling: Always use try-except blocks for database operations (a sketch follows the pagination example below).
  • Use ObjectIds Properly: MongoDB uses _id field which is an ObjectId by default.
  • Secure Your Database: Never expose MongoDB without authentication on public networks.
  • Paginate Queries: Use .skip() and .limit() to paginate large datasets.

Example of basic pagination:

page_size = 10
page_number = 2
skip = page_size * (page_number - 1)

for emp in employees.find().skip(skip).limit(page_size):
    print(emp)
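
The error-handling advice above can look like the following sketch; DuplicateKeyError assumes the unique index on email created in the indexing section, and the document values are illustrative:

from pymongo.errors import DuplicateKeyError, PyMongoError

try:
    employees.insert_one({"name": "Alice", "email": "alice@example.com"})
except DuplicateKeyError:
    print("An employee with this email already exists.")
except PyMongoError as e:
    # Catch-all for other driver or server errors (connection failures, timeouts, ...)
    print(f"Database operation failed: {e}")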

Conclusion

By integrating MongoDB with Python using PyMongo, developers can build fast, scalable, and flexible applications that can handle complex and dynamic data models.

In this module, you have learned how to:

  • Connect to MongoDB
  • Perform CRUD operations
  • Use advanced queries and indexing
  • Implement aggregation pipelines
  • Follow best practices for database interaction

PostgreSQL Integration with psycopg2 in Python: A Complete Guide


Table of Contents

  • Introduction
  • Why PostgreSQL for Python Applications?
  • What is psycopg2?
  • Installing psycopg2
  • Connecting to a PostgreSQL Database
  • Creating Tables in PostgreSQL with psycopg2
  • Performing CRUD Operations
    • Insert Data
    • Read Data
    • Update Data
    • Delete Data
  • Using Transactions and Rollbacks
  • Best Practices When Working with psycopg2
  • Conclusion

Introduction

When developing modern Python applications, using a robust and scalable database becomes critical.
PostgreSQL is one of the most popular, open-source relational database systems used by organizations across the world for complex applications.

In this module, you will learn how to integrate PostgreSQL with Python using the psycopg2 library.
You will perform basic and advanced operations such as connecting to a database, executing SQL commands, managing transactions, and ensuring best practices for production-ready systems.


Why PostgreSQL for Python Applications?

  • Reliability and Performance: PostgreSQL offers ACID compliance and high-performance capabilities.
  • Advanced Features: Support for JSON, full-text search, and advanced indexing.
  • Scalability: Handles small apps to enterprise-scale systems.
  • Open-source and Extensible: No licensing fees and highly customizable.
  • Security Features: Offers SSL, authentication methods, and role management.

Python works seamlessly with PostgreSQL, making it a preferred combination for developers building data-driven applications.


What is psycopg2?

psycopg2 is the most widely used PostgreSQL adapter for Python. It allows your Python applications to:

  • Establish connections to a PostgreSQL database.
  • Execute SQL commands like INSERT, SELECT, UPDATE, and DELETE.
  • Manage transactions.
  • Handle errors efficiently.

It is known for its efficiency, thread safety, and compliance with DB-API 2.0 standards.


Installing psycopg2

You can install psycopg2 using pip:

pip install psycopg2

Alternatively, for easier installation (no need for PostgreSQL client libraries), you can use the binary distribution:

pip install psycopg2-binary

Note: psycopg2-binary is convenient for development, but for production it is recommended to install psycopg2 built from source for better control.


Connecting to a PostgreSQL Database

You need a PostgreSQL server running locally or remotely.
Here is how you can connect to it:

import psycopg2

try:
    conn = psycopg2.connect(
        dbname="your_database_name",
        user="your_username",
        password="your_password",
        host="localhost",
        port="5432"
    )
    cursor = conn.cursor()
    print("Database connection successful.")
except Exception as e:
    print(f"An error occurred: {e}")

Key Parameters:

  • dbname: Database name you want to connect to.
  • user: Username for authentication.
  • password: Password for the user.
  • host: Hostname or IP address of the database server.
  • port: Port number (default is 5432).

Always handle exceptions to catch connection issues.


Creating Tables in PostgreSQL with psycopg2

Once connected, you can create a new table:

cursor.execute('''
    CREATE TABLE IF NOT EXISTS employees (
        id SERIAL PRIMARY KEY,
        name VARCHAR(100),
        department VARCHAR(50),
        salary INTEGER
    )
''')
conn.commit()

  • SERIAL: Automatically generates a sequence number (Auto-incrementing ID).
  • Always commit changes after executing data-altering commands.

Performing CRUD Operations

Now let us see how to Create, Read, Update, and Delete data.

Insert Data

cursor.execute('''
    INSERT INTO employees (name, department, salary)
    VALUES (%s, %s, %s)
''', ('John Doe', 'IT', 70000))
conn.commit()

  • %s placeholders prevent SQL injection.
  • Always use parameterized queries.

Read Data

cursor.execute('SELECT * FROM employees')
records = cursor.fetchall()

for row in records:
    print(row)

  • fetchall(): Retrieves all rows from the last executed statement.
  • fetchone(): Fetches one row.

Update Data

cursor.execute('''
    UPDATE employees
    SET salary = %s
    WHERE name = %s
''', (75000, 'John Doe'))
conn.commit()

  • Update fields conditionally based on the WHERE clause.

Delete Data

cursor.execute('''
    DELETE FROM employees
    WHERE name = %s
''', ('John Doe',))
conn.commit()

  • Use caution with delete queries; always specify conditions to prevent mass deletions.

Using Transactions and Rollbacks

psycopg2 automatically starts a new transaction when you connect.
You need to explicitly call commit() to save changes or rollback() to undo in case of an error.

Rollback Example

try:
    cursor.execute('''
        INSERT INTO employees (name, department, salary)
        VALUES (%s, %s, %s)
    ''', ('Jane Smith', 'HR', 'wrong_type'))  # Intentional error
    conn.commit()
except Exception as e:
    print(f"Error occurred: {e}")
    conn.rollback()

Always roll back in case of errors to maintain database integrity.


Best Practices When Working with psycopg2

  1. Use Connection Pooling:
    For production environments, use libraries like psycopg2.pool to manage database connections efficiently (see the sketch after this list).
  2. Close Resources:
    Always close cursors and connections.

cursor.close()
conn.close()

  3. Parameterized Queries:
    Always use placeholders %s to prevent SQL injection.
  4. Use Context Managers:
    Cleaner and safer code using with statements.

import psycopg2

with psycopg2.connect(
    dbname="your_database_name",
    user="your_username",
    password="your_password",
    host="localhost",
    port="5432"
) as conn:
    with conn.cursor() as cursor:
        cursor.execute('SELECT * FROM employees')
        print(cursor.fetchall())

  5. Handle Exceptions Gracefully:
    Use try-except-finally blocks around critical database operations.
  6. Minimize Open Connections:
    Always open connections just before you need them and close them as soon as possible.
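
A minimal sketch of the pooling advice in point 1, using psycopg2's built-in SimpleConnectionPool (the connection parameters are placeholders):

from psycopg2 import pool

# Pool of 1 to 10 reusable connections
connection_pool = pool.SimpleConnectionPool(
    1, 10,
    dbname="your_database_name",
    user="your_username",
    password="your_password",
    host="localhost",
    port="5432"
)

conn = connection_pool.getconn()        # borrow a connection from the pool
try:
    with conn.cursor() as cursor:
        cursor.execute('SELECT * FROM employees')
        print(cursor.fetchall())
finally:
    connection_pool.putconn(conn)       # return it to the pool instead of closing it

connection_pool.closeall()              # close every pooled connection at shutdown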

Conclusion

Integrating PostgreSQL with Python using psycopg2 opens the door to building powerful, data-driven applications.
You have now learned how to connect, create tables, insert, read, update, and delete data, along with handling transactions properly.

This forms a solid base for more advanced topics such as:

  • Connection pooling
  • Asynchronous database access
  • ORMs like SQLAlchemy and Django ORM

In the next module, we will dive deeper into Advanced PostgreSQL with Python including transactions, indexes, stored procedures, and performance optimization techniques.