
Today in History – 6 June


1596

Guru Har Gobind, the sixth Sikh Guru, was born. He fashioned a military role for the Sikhs.

1683

The Ashmolean, the world’s first university museum, opened in Oxford, England. At the time of the English Restoration, Oxford was the center of scientific activity in England.

1867

Baba Kharak Singh, freedom fighter, was born at Sialkot.

1890

Gopinath Bardoloi, architect of modern Assam, freedom fighter and leader, was born.

1891

Venkatesh Iyengar Masti, famous Kannada poet, story writer, novelist, playwright and critic, was born.

1916

Lord Kitchener, the premier soldier of the British Empire, died when the cruiser HMS Hampshire, on which he was traveling to Russia to boost sagging morale, struck a mine or was torpedoed off the Orkney Islands and sank, drowning all aboard. Life in London came to a standstill, and Paris and Washington were shocked by the news. Over the previous half-century, through tireless energy and devotion to imperial duty, Horatio Herbert Kitchener, 66, had commanded in Palestine, Cyprus, Egypt, Sudan, South Africa, and India. Two years earlier, he had become War Secretary.

1918

The first large-scale battle fought by American soldiers in World War I began in Belleau Wood, northwest of the Paris-to-Metz road.

1944

Although the term D-Day is used routinely as military lingo for the day an operation or event will take place, for many it is also synonymous with June 6, 1944, the day the Allied powers crossed the English Channel and landed on the beaches of Normandy, France, beginning the liberation of Western Europe from Nazi control during World War II. Within three months, the northern part of France would be freed and the invasion force would be preparing to enter Germany, where they would meet up with Soviet forces moving in from the east.

1947

Gandhiji wrote to Mountbatten that, with Pakistan conceded, he should persuade Jinnah to amicably settle all outstanding points with the Congress.

1961

The Central Institute of Fisheries Education, a deemed university, was established in Mumbai to provide post-graduate education and training, mainly to the country's in-service fisheries personnel, and to supply trained manpower for fisheries development activities.

1990

The Government of India decided to extend the validity of passports to 10 years.

1996

Benazir Bhutto, Pakistan's Prime Minister, gave the green signal for opening up trade with India.

1997

The new economic grouping BIST-EC (Bangladesh-India-Sri Lanka-Thailand Economic Cooperation forum) came into being.

1997

The U.S. Congress honoured Mother Teresa with a gold medal.

1998

All IAF Meteorological Sections were put on high alert from 6 June 1998. Constant interaction with the India Meteorological Department (IMD) at Mumbai and the IMD Centre at Ahmedabad enabled round-the-clock tracking and plotting of the cyclone, which was passed on as forecasts to Service and civil authorities in the Saurashtra and Kutch region.

1999

Paes and Bhupathi bagged their maiden Grand Slam doubles crown at the French Open, defeating Goran Ivanisevic and Jeff Tarango in the final.


Vector Spaces and Inner Products: Foundations of Linear Structure and Geometry


Table of Contents

  1. Introduction
  2. What Is a Vector Space?
  3. Axioms of Vector Spaces
  4. Subspaces and Spanning Sets
  5. Linear Independence and Basis
  6. Dimension and Coordinate Systems
  7. Inner Product: Definition and Properties
  8. Examples of Inner Products
  9. Norms and Angles
  10. Orthogonality and Orthonormal Sets
  11. Projection of Vectors
  12. The Gram-Schmidt Process
  13. Orthogonal Complements and Decompositions
  14. Inner Product Spaces vs Euclidean Spaces
  15. Applications in Physics and Machine Learning
  16. Conclusion

1. Introduction

Vector spaces and inner products form the conceptual core of linear algebra. A vector space provides the structure to perform operations like addition and scalar multiplication. An inner product allows us to measure lengths, angles, and define geometric notions such as orthogonality.

These concepts are essential across physics, mathematics, engineering, and data science.


2. What Is a Vector Space?

A vector space over a field \( \mathbb{F} \) (usually \( \mathbb{R} \) or \( \mathbb{C} \)) is a set \( V \) equipped with two operations:

  1. Vector addition: \( \vec{u} + \vec{v} \in V \)
  2. Scalar multiplication: \( a\vec{v} \in V \)

It satisfies certain axioms such as associativity, distributivity, and the existence of a zero vector.


3. Axioms of Vector Spaces

A vector space \( V \) must satisfy:

  1. Closure under addition and scalar multiplication
  2. Associativity: \( (\vec{u} + \vec{v}) + \vec{w} = \vec{u} + (\vec{v} + \vec{w}) \)
  3. Commutativity: \( \vec{u} + \vec{v} = \vec{v} + \vec{u} \)
  4. Existence of additive identity: \( \vec{v} + \vec{0} = \vec{v} \)
  5. Existence of additive inverse: \( \vec{v} + (-\vec{v}) = \vec{0} \)
  6. Distributivity: \( a(\vec{u} + \vec{v}) = a\vec{u} + a\vec{v} \), etc.

4. Subspaces and Spanning Sets

A subspace is a subset of a vector space that is itself a vector space.

Given vectors \( \vec{v}_1, \dots, \vec{v}_n \), the span is:

\[
\text{span}\{\vec{v}_1, \dots, \vec{v}_n\} = \left\{ \sum_{i=1}^n a_i \vec{v}_i \mid a_i \in \mathbb{F} \right\}
\]

It is the smallest subspace containing all \( \vec{v}_i \).
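
To make this concrete, membership in a span can be tested numerically: a vector \( \vec{w} \) lies in \( \text{span}\{\vec{v}_1, \dots, \vec{v}_n\} \) exactly when the least-squares problem \( V\vec{a} \approx \vec{w} \) has zero residual. A minimal NumPy sketch, with arbitrarily chosen vectors:

```python
import numpy as np

# Spanning vectors v1 = (1,0,1), v2 = (0,1,1) as columns of V
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
w = np.array([2.0, 3.0, 5.0])  # candidate vector

# Solve V a ~ w in the least-squares sense
a, *_ = np.linalg.lstsq(V, w, rcond=None)

print("coefficients:", a)                  # [2. 3.]
print("in span:", np.allclose(V @ a, w))   # True: w = 2 v1 + 3 v2
```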


5. Linear Independence and Basis

A set \( \{\vec{v}_1, \dots, \vec{v}_n\} \) is linearly independent if:

\[
a_1\vec{v}_1 + \dots + a_n\vec{v}_n = \vec{0} \Rightarrow a_i = 0 \ \forall i
\]

A basis is a linearly independent set of vectors that spans the space; equivalently, it is a minimal spanning set.
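
Linear independence is mechanical to check: stack the vectors as columns of a matrix and compare its rank to the number of vectors. A short NumPy illustration (the vectors are chosen for demonstration):

```python
import numpy as np

# Three vectors in R^3 as columns; the third is the sum of the first two
A = np.column_stack([[1, 0, 0], [1, 1, 0], [2, 1, 0]])

# Full column rank (here, 3) would mean linear independence
print(np.linalg.matrix_rank(A))  # 2, so the set is linearly dependent
```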


6. Dimension and Coordinate Systems

The number of vectors in any basis of a vector space is called its dimension.

Every vector can be uniquely expressed as a linear combination of basis vectors:

\[
\vec{v} = a_1\vec{e}_1 + a_2\vec{e}_2 + \dots + a_n\vec{e}_n
\]

The coefficients \( a_i \) are the coordinates of \( \vec{v} \).


7. Inner Product: Definition and Properties

An inner product on a real vector space \( V \) is a function:

\[
\langle \cdot, \cdot \rangle: V \times V \to \mathbb{R}
\]

that satisfies:

  1. Linearity in the first argument
  2. Symmetry: \( \langle \vec{u}, \vec{v} \rangle = \langle \vec{v}, \vec{u} \rangle \)
  3. Positive-definiteness: \( \langle \vec{v}, \vec{v} \rangle \geq 0 \) and equals zero only when \( \vec{v} = \vec{0} \)

8. Examples of Inner Products

  • Dot product in \( \mathbb{R}^n \):
    \[
    \langle \vec{u}, \vec{v} \rangle = \sum u_i v_i
    \]
  • Function inner product:
    \[
    \langle f, g \rangle = \int_a^b f(x)g(x) \, dx
    \]
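
Both examples are easy to verify numerically. The sketch below (with illustrative vectors and functions) computes a dot product directly and approximates a function inner product with the trapezoidal rule on a grid:

```python
import numpy as np

# Dot product in R^3
u, v = np.array([1.0, 2.0, 2.0]), np.array([3.0, 0.0, 4.0])
print(np.dot(u, v))  # 11.0 = 1*3 + 2*0 + 2*4

# <f, g> = integral of f(x) g(x) over [0, pi], trapezoidal rule
x = np.linspace(0.0, np.pi, 10001)
h = np.sin(x) * np.sin(2 * x)
ip = np.sum(h[:-1] + h[1:]) * (x[1] - x[0]) / 2
print(round(ip, 6))  # ~0: sin(x) and sin(2x) are orthogonal on [0, pi]
```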

9. Norms and Angles

The norm (length) of a vector is:

\[
|\vec{v}| = \sqrt{\langle \vec{v}, \vec{v} \rangle}
\]

The angle \( \theta \) between two vectors is defined via:

\[
\cos \theta = \frac{\langle \vec{u}, \vec{v} \rangle}{|\vec{u}| |\vec{v}|}
\]
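
Both quantities follow directly from the inner product; a quick NumPy check with sample vectors:

```python
import numpy as np

u, v = np.array([1.0, 0.0]), np.array([1.0, 1.0])
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.linalg.norm(v))                  # 1.4142... = sqrt(2)
print(np.degrees(np.arccos(cos_theta)))   # 45.0 degrees
```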


10. Orthogonality and Orthonormal Sets

Two vectors are orthogonal if:

\[
\langle \vec{u}, \vec{v} \rangle = 0
\]

A set is orthonormal if:

  • All vectors have unit norm
  • All vectors are mutually orthogonal

11. Projection of Vectors

The projection of \( \vec{v} \) onto \( \vec{u} \) is:

\[
\text{proj}_{\vec{u}} \vec{v} = \frac{\langle \vec{v}, \vec{u} \rangle}{\langle \vec{u}, \vec{u} \rangle} \vec{u}
\]

Used in least squares, geometry, and physics.
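
The formula translates directly into code; a minimal NumPy sketch with sample vectors, checking that the residual \( \vec{v} - \text{proj}_{\vec{u}} \vec{v} \) is orthogonal to \( \vec{u} \):

```python
import numpy as np

def project(v, u):
    """Orthogonal projection of v onto the line spanned by u."""
    return (np.dot(v, u) / np.dot(u, u)) * u

v, u = np.array([3.0, 4.0]), np.array([1.0, 0.0])
p = project(v, u)
print(p)                  # [3. 0.]
print(np.dot(v - p, u))   # 0.0: the residual is orthogonal to u
```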


12. The Gram-Schmidt Process

The Gram-Schmidt process transforms a linearly independent set \( \{ \vec{v}_1, \dots, \vec{v}_n \} \) into an orthonormal set \( \{ \vec{u}_1, \dots, \vec{u}_n \} \).

Algorithm:

  1. Set \( \vec{u}_1 = \vec{v}_1 / |\vec{v}_1| \)
  2. Subtract projections from subsequent vectors
  3. Normalize at each step
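
A minimal NumPy sketch of the procedure (the "modified" variant, which subtracts each projection from the running remainder and is numerically better behaved; the input is assumed linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= np.dot(w, u) * u            # remove component along u
        basis.append(w / np.linalg.norm(w))  # normalize
    return basis

u1, u2 = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                       np.array([1.0, 0.0, 1.0])])
print(round(np.dot(u1, u2), 10))               # 0.0 (orthogonal)
print(np.linalg.norm(u1), np.linalg.norm(u2))  # 1.0 1.0 (unit norm)
```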

13. Orthogonal Complements and Decompositions

For a subspace \( W \subseteq V \), the orthogonal complement \( W^\perp \) consists of all vectors orthogonal to every vector in \( W \).

Any vector \( \vec{v} \in V \) can be decomposed as:

\[
\vec{v} = \vec{w} + \vec{w}^\perp
\]

where \( \vec{w} \in W \) and \( \vec{w}^\perp \in W^\perp \).
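
Given an orthonormal basis of \( W \), the component \( \vec{w} \) is the sum of projections onto the basis vectors, and \( \vec{w}^\perp \) is what remains. A small NumPy illustration taking \( W \) to be the xy-plane in \( \mathbb{R}^3 \):

```python
import numpy as np

# Orthonormal basis of W (the xy-plane)
e1, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
v = np.array([2.0, 3.0, 5.0])

w = np.dot(v, e1) * e1 + np.dot(v, e2) * e2  # component in W
w_perp = v - w                               # component in W-perp
print(w, w_perp)                             # [2. 3. 0.] [0. 0. 5.]
print(np.dot(w_perp, e1), np.dot(w_perp, e2))  # 0.0 0.0
```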


14. Inner Product Spaces vs Euclidean Spaces

  • Euclidean spaces: equipped with standard dot product
  • Inner product spaces: abstract vector spaces with a defined inner product (can be infinite-dimensional)

All Euclidean spaces are inner product spaces, but not vice versa.


15. Applications in Physics and Machine Learning

  • Quantum mechanics: Hilbert spaces, bras and kets
  • Mechanics: orthogonality of modes in vibration
  • ML & AI: projections, distances, similarity (e.g., cosine similarity)
  • Signal processing: Fourier series as orthonormal expansions

16. Conclusion

Vector spaces provide a linear framework for abstract reasoning, and inner products bring in geometric structure — angles, lengths, and orthogonality. Together, they underpin a vast array of theoretical and applied sciences.

From quantum physics to machine learning, mastering vector spaces and inner products is fundamental to advanced mathematical and physical reasoning.



Linear Algebra Essentials: Vectors, Matrices, and Transformations


Table of Contents

  1. Introduction
  2. Scalars, Vectors, and Vector Spaces
  3. Linear Combinations and Span
  4. Linear Independence and Basis
  5. Matrices and Matrix Operations
  6. Linear Transformations
  7. The Rank of a Matrix
  8. Systems of Linear Equations and Gaussian Elimination
  9. Determinants and Their Properties
  10. Inverse of a Matrix
  11. Eigenvalues and Eigenvectors
  12. Diagonalization and Jordan Form
  13. Inner Product Spaces and Orthogonality
  14. Gram-Schmidt Process and Orthonormal Bases
  15. Applications in Physics and Data Science
  16. Conclusion

1. Introduction

Linear algebra is the study of vectors, vector spaces, and linear transformations between them. It forms the mathematical foundation for much of physics, engineering, computer science, and data science. This article provides a detailed primer on the core ideas of linear algebra, focusing on intuition, structure, and real-world relevance.


2. Scalars, Vectors, and Vector Spaces

  • A scalar is a real (or complex) number.
  • A vector is an ordered list of numbers (components), often interpreted as a direction and magnitude.
  • A vector space is a set of vectors closed under vector addition and scalar multiplication.

A set \( V \) is a vector space if for any \( \vec{u}, \vec{v} \in V \) and scalar \( c \in \mathbb{R} \), we have:

\[
\vec{u} + \vec{v} \in V, \quad c \vec{v} \in V
\]


3. Linear Combinations and Span

A linear combination of vectors \( \vec{v}_1, \dots, \vec{v}_n \) is:

\[
a_1 \vec{v}_1 + a_2 \vec{v}_2 + \dots + a_n \vec{v}_n
\]

The span of a set is the collection of all linear combinations of those vectors. It forms a subspace of the vector space.


4. Linear Independence and Basis

Vectors are linearly independent if:

\[
a_1 \vec{v}_1 + a_2 \vec{v}_2 + \dots + a_n \vec{v}_n = \vec{0} \Rightarrow a_1 = a_2 = \dots = a_n = 0
\]

A basis is a linearly independent set of vectors that spans the space. The number of basis vectors is the dimension.


5. Matrices and Matrix Operations

A matrix is a rectangular array of numbers that represents a linear transformation. Key operations:

  • Addition: element-wise
  • Scalar multiplication: scaling every element
  • Matrix multiplication: composition of transformations
  • Transpose: swapping rows and columns
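
These operations map one-to-one onto NumPy (a brief sketch with illustrative matrices):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

print(A + B)    # element-wise addition
print(2 * A)    # scalar multiplication
print(A @ B)    # matrix multiplication (composition of transformations)
print(A.T)      # transpose: rows and columns swapped
```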

6. Linear Transformations

A linear transformation \( T: \mathbb{R}^n \to \mathbb{R}^m \) satisfies:

\[
T(a\vec{v} + b\vec{w}) = aT(\vec{v}) + bT(\vec{w})
\]

Every linear transformation can be represented by a matrix.
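
For example, a rotation of the plane is linear, and its matrix can be checked against the defining property (a NumPy sketch with illustrative values):

```python
import numpy as np

# Matrix of a 90-degree rotation of the plane
t = np.pi / 2
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

v, w = np.array([1.0, 0.0]), np.array([0.0, 1.0])
a, b = 2.0, 3.0

# Linearity: T(a v + b w) == a T(v) + b T(w)
print(np.allclose(R @ (a * v + b * w), a * (R @ v) + b * (R @ w)))  # True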


7. The Rank of a Matrix

The rank of a matrix is the dimension of the image (range) of its associated linear transformation.

  • Equals the number of linearly independent rows or columns
  • Determines the number of solutions to linear systems

8. Systems of Linear Equations and Gaussian Elimination

A linear system can be written as \( A \vec{x} = \vec{b} \)

  • Gaussian elimination transforms \( A \) into row echelon form
  • Back substitution finds solutions

The number of solutions depends on the rank and consistency of the system.
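
A small NumPy sketch (illustrative system): `np.linalg.solve` carries out the elimination via an LU factorization with pivoting, and comparing the rank of \( A \) with the rank of the augmented matrix checks consistency:

```python
import numpy as np

# 2x + y = 3,  x + 3y = 5
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)       # Gaussian elimination under the hood
print(x)                        # [0.8 1.4]
print(np.allclose(A @ x, b))    # True

# Consistent system: rank(A) == rank([A | b])
aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))  # 2 2
```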


9. Determinants and Their Properties

The determinant \( \det(A) \) is a scalar associated with a square matrix.

  • \( \det(A) = 0 \): matrix is singular (not invertible)
  • \( \det(AB) = \det(A)\det(B) \)

It also gives the scaling factor for volume under transformation.
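
Both properties are easy to confirm numerically (illustrative matrices):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # scales areas by 2 * 3 = 6
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # a shear: preserves area, det = 1

print(np.linalg.det(A))      # 6.0
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))  # True
```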


10. Inverse of a Matrix

A square matrix \( A \) is invertible if there exists a matrix \( A^{-1} \) such that:

\[
AA^{-1} = A^{-1}A = I
\]

Used to solve systems of equations: \( \vec{x} = A^{-1} \vec{b} \)
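
A quick NumPy check of the definition, reusing the system from Section 8 (note that in numerical work one normally calls `np.linalg.solve` rather than forming the inverse explicitly):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))  # True: A A^{-1} = I
print(A_inv @ b)                          # [0.8 1.4], the solution of Ax = b
```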


11. Eigenvalues and Eigenvectors

For a matrix \( A \), an eigenvector \( \vec{v} \) satisfies:

\[
A\vec{v} = \lambda \vec{v}
\]

where \( \lambda \) is the eigenvalue. Eigenvectors describe the directions that are only scaled, not rotated, by the transformation.
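
A brief NumPy sketch with an illustrative symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)  # eigenvalues 3 and 1 (order may vary)

# Each column of eigvecs satisfies A v = lambda v
for i in range(2):
    v, lam = eigvecs[:, i], eigvals[i]
    print(np.allclose(A @ v, lam * v))  # True
```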


12. Diagonalization and Jordan Form

If a matrix has \( n \) linearly independent eigenvectors, it can be diagonalized:

\[
A = PDP^{-1}
\]

where \( D \) is diagonal with the eigenvalues on its diagonal, and the columns of \( P \) are the corresponding eigenvectors. Otherwise, the Jordan canonical form is used.
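
The decomposition can be verified directly (illustrative matrix; `np.linalg.eig` returns the eigenvectors as the columns of \( P \)):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # eigenvalues 5 and 2
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)

# Reconstruct A from its eigen-decomposition A = P D P^{-1}
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True
```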


13. Inner Product Spaces and Orthogonality

An inner product defines angles and lengths:

\[
\langle \vec{u}, \vec{v} \rangle = \sum u_i v_i
\]

Two vectors are orthogonal if their inner product is zero.


14. Gram-Schmidt Process and Orthonormal Bases

Used to convert a linearly independent set into an orthonormal basis:

  • Orthogonal: vectors at right angles
  • Normalized: unit length

Useful in numerical methods and quantum mechanics.
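
In numerical practice, an orthonormal basis is usually obtained from a QR factorization, which performs this orthonormalization in a more stable way than the textbook procedure (a brief sketch):

```python
import numpy as np

# The columns of A are the vectors to orthonormalize
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Q, R = np.linalg.qr(A)   # columns of Q are orthonormal, same span as A
print(np.allclose(Q.T @ Q, np.eye(2)))  # True
```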


15. Applications in Physics and Data Science

  • Quantum mechanics: state vectors and operators
  • Classical mechanics: moment of inertia tensors
  • Machine learning: PCA and dimensionality reduction
  • Computer graphics: transformations and projections
  • Signal processing: Fourier analysis via linear algebra

16. Conclusion

Linear algebra is a foundational tool in both theoretical and applied sciences. Mastery of vectors, matrices, transformations, and eigen-decomposition enables powerful analysis across physics, data science, engineering, and beyond.



Today in History – 4 June


1903

Gandhiji launched the weekly journal “Indian Opinion”, produced at the Phoenix farm near Durban.

1911

Gold was discovered in Alaska’s Indian Creek.

1926

The Dalai Lama introduced a tax on ears for Tibetans to equip the army; those with only one ear paid half the tax.

1941

National Seva Dal was established.

1947

Hindi daily ‘Nai Duniya’ started publication at Indore, Madhya Pradesh.

1953

The University of Cambridge conferred an honorary Doctorate of Laws on J. L. Nehru.

1955

The Ministry of Iron & Steel was inaugurated.

1958

A three-man Indian mountaineering group successfully climbed a high peak in the Garhwal hills.

1972

The first Environment Day was observed.

1987

A Swedish Government inquiry found that Bofors had paid commissions to middlemen for concluding the arms purchase agreement with India.

1994

The Army successfully test-fired the short-range surface-to-surface missile ‘Prithvi’.

1997

The fourth Indian Remote Sensing Satellite (IRS-1D), carrying advanced remote-sensing cameras, was launched by India’s Polar Satellite Launch Vehicle (PSLV-1D) and entered service.

1997

The Indian National Satellite INSAT-2D, the fourth satellite in the INSAT-2 series, was launched.

1997

The Defence Ministry denied the Washington Post’s report that India had deployed the Prithvi missile on the Punjab border with Pakistan.


Calculus of Variations: Finding Functions that Optimize Functionals


Table of Contents

  1. Introduction
  2. What Is the Calculus of Variations?
  3. Functionals and Their Extremization
  4. The Euler-Lagrange Equation
  5. Derivation of the Euler-Lagrange Equation
  6. Boundary Conditions
  7. Examples of Variational Problems
  8. Variational Principles in Physics
  9. Lagrangian Mechanics and the Principle of Least Action
  10. Constraints and the Lagrange Multipliers
  11. Variations with Several Functions and Higher Derivatives
  12. Hamilton’s Principle
  13. Noether’s Theorem and Symmetries
  14. Applications in Physics and Engineering
  15. Conclusion

1. Introduction

The calculus of variations is a mathematical method used to find the function (or functions) that makes a given quantity — usually an integral — stationary (minimum, maximum, or saddle point). It is the foundation of classical mechanics, optics, and many areas of physics and engineering.


2. What Is the Calculus of Variations?

Unlike traditional calculus, which finds the extrema of functions, the calculus of variations seeks the extrema of functionals — mappings from a space of functions to the real numbers.

A functional typically has the form:

\[
J[y] = \int_{a}^{b} L(x, y(x), y'(x)) \, dx
\]

Our goal: find a function \( y(x) \) such that \( J[y] \) is minimized (or maximized).
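
As a concrete numerical illustration, take the arc-length functional \( J[y] = \int_0^1 \sqrt{1 + y'(x)^2} \, dx \) and compare two candidate curves joining \( (0,0) \) and \( (1,1) \); the straight line should give the smaller value. A minimal NumPy sketch:

```python
import numpy as np

def arc_length(y, a=0.0, b=1.0, n=100001):
    """Approximate J[y] = integral of sqrt(1 + y'(x)^2) dx."""
    x = np.linspace(a, b, n)
    yp = np.gradient(y(x), x)              # finite-difference y'(x)
    f = np.sqrt(1 + yp**2)
    return np.sum(f[:-1] + f[1:]) * (x[1] - x[0]) / 2  # trapezoidal rule

print(arc_length(lambda x: x))     # ~1.4142 (straight line: sqrt(2))
print(arc_length(lambda x: x**2))  # ~1.4789 (parabola: longer path)
```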


3. Functionals and Their Extremization

Given a functional \( J[y] \), we consider small variations of \( y \):

\[
y(x) \to y(x) + \epsilon \eta(x)
\]

where \( \eta(x) \) is an arbitrary differentiable function vanishing at the endpoints: \( \eta(a) = \eta(b) = 0 \), and \( \epsilon \) is small.


4. The Euler-Lagrange Equation

The central result of the calculus of variations is the Euler-Lagrange equation:

\[
\frac{\partial L}{\partial y} - \frac{d}{dx} \left( \frac{\partial L}{\partial y'} \right) = 0
\]

Any function \( y(x) \) that extremizes the functional must satisfy this differential equation.
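
SymPy can carry out this derivation symbolically. A sketch for the arc-length Lagrangian \( L = \sqrt{1 + y'^2} \), whose extremals are straight lines:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

L = sp.sqrt(1 + y(x).diff(x)**2)   # arc-length Lagrangian

eq, = euler_equations(L, [y(x)], [x])
print(sp.simplify(eq))   # equivalent to y''(x) = 0, i.e. straight lines
```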


5. Derivation of the Euler-Lagrange Equation

Start with:

\[
J[y + \epsilon \eta] = \int_a^b L(x, y + \epsilon \eta, y' + \epsilon \eta') \, dx
\]

Differentiate with respect to \( \epsilon \), set \( \epsilon = 0 \), and use integration by parts. The vanishing of the first variation \( \delta J \) leads directly to the Euler-Lagrange equation.


6. Boundary Conditions

  • Fixed endpoints: \( y(a) \) and \( y(b) \) fixed → standard Euler-Lagrange
  • Free endpoints: Leads to natural boundary conditions:

\[
\left. \frac{\partial L}{\partial y'} \right|_{x=a} = 0, \qquad \left. \frac{\partial L}{\partial y'} \right|_{x=b} = 0
\]


7. Examples of Variational Problems

  • Shortest path between two points: yields a straight line
  • Brachistochrone problem: find the curve of fastest descent
  • Catenary: shape of a hanging chain under gravity

Each problem uses the Euler-Lagrange equation to derive the solution function.


8. Variational Principles in Physics

Many physical laws arise from variational principles. These include:

  • Fermat’s principle (optics)
  • Hamilton’s principle (mechanics)
  • Least action principle (field theory)

9. Lagrangian Mechanics and the Principle of Least Action

In Lagrangian mechanics, the action is:

\[
S = \int_{t_1}^{t_2} L(q, \dot{q}, t) \, dt
\]

where \( L = T - V \) is the Lagrangian. The path taken by a system between two configurations is the one for which \( S \) is stationary.
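
For a concrete case, applying the Euler-Lagrange equation to the harmonic-oscillator Lagrangian recovers Newton's equation of motion; a short SymPy sketch:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# L = T - V for a mass on a spring
L = m * q(t).diff(t)**2 / 2 - k * q(t)**2 / 2

eq, = euler_equations(L, [q(t)], [t])
print(eq)   # Eq(-k*q(t) - m*Derivative(q(t), (t, 2)), 0), i.e. m q'' = -k q
```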


10. Constraints and the Lagrange Multipliers

When constraints \( f_i(x, y, y') = 0 \) are present, we use Lagrange multipliers:

\[
J[y] = \int_a^b \left( L + \lambda f \right) dx
\]

The resulting Euler-Lagrange equations now include terms involving \( \lambda \).


11. Variations with Several Functions and Higher Derivatives

For multiple functions \( y_i(x) \), we get a system of Euler-Lagrange equations:

\[
\frac{\partial L}{\partial y_i} - \frac{d}{dx} \left( \frac{\partial L}{\partial y_i'} \right) = 0
\]

For Lagrangians involving higher derivatives (e.g., \( y''(x) \)), the equation generalizes accordingly: each derivative \( y^{(k)} \) contributes a term \( (-1)^k \frac{d^k}{dx^k} \left( \frac{\partial L}{\partial y^{(k)}} \right) \).


12. Hamilton’s Principle

Hamilton’s principle states:

The actual path taken by a physical system between two times is the one that makes the action stationary (often, though not always, a minimum).

This forms the bridge to Hamiltonian mechanics and quantum field theory.


13. Noether’s Theorem and Symmetries

Noether’s theorem connects symmetries of the action to conserved quantities:

  • Time symmetry → energy conservation
  • Space symmetry → momentum conservation
  • Rotational symmetry → angular momentum conservation

It is a profound result rooted in the calculus of variations.


14. Applications in Physics and Engineering

  • Deriving equations of motion in classical mechanics
  • Field equations in electromagnetism and general relativity
  • Optimal control in engineering
  • Shape optimization in mechanical structures

15. Conclusion

The calculus of variations is a powerful tool that unifies physics, mathematics, and engineering. From shortest paths to fundamental laws, it provides the framework to derive governing equations from extremal principles.

Understanding variational methods is essential for theoretical physicists, applied mathematicians, and engineers alike.

