Matrices
Vector space
Definition
- A set of elements on which vector addition and scalar multiplication are defined
- Closed under these operations
- Includes an identity element for vector addition (the zero vector $\mathbf{0}$)
Note:
- Lines and planes in 2D/3D are vector (sub)spaces only if they pass through the origin, since a vector space must contain the single point $\mathbf{0}$.
Span
The set of all vectors that are linear combinations of the vectors of a set $S = \{\mathbf{v}_1, \dots, \mathbf{v}_n\}$, i.e. $\mathrm{span}(S) = \{\sum_i c_i \mathbf{v}_i\}$.
Linearly independent
None of the vectors is a linear combination of the others, or equivalently: $\sum_i c_i \mathbf{v}_i = \mathbf{0}$ only if all $c_i = 0$.
Basis
A set that spans $V$ and is linearly independent.
- The number of elements is the dimension of $V$.
- Any vector can be expressed as a linear combination of the basis set in a unique way.
- Can be infinite-dimensional, e.g. Fourier series
Change of bases
Primed / unprimed opposite to lecture notes convention.
- Idea: the same vector has different components in different bases.
Let $\{\mathbf{e}_i\}$ (unprimed) and $\{\mathbf{e}'_j\}$ (primed) be two bases, and write
$$\mathbf{e}'_j = \sum_i S_{ij}\,\mathbf{e}_i,$$
where $S_{ij}$ is the $i$th component of $\mathbf{e}'_j$ in the unprimed basis, i.e. the columns of $S$ are the primed basis vectors expressed in the unprimed basis.
Check by reversing the argument: $\mathbf{x} = \sum_j x'_j \mathbf{e}'_j = \sum_{i,j} S_{ij}\, x'_j\, \mathbf{e}_i$ gives back the unprimed components.
Now,
$$x_i = \sum_j S_{ij}\, x'_j, \qquad \text{i.e. } x = S x'.$$
Inverse
Define the transformations $x = S x'$ and $x' = T x$.
Sub in: $x = S T x$ for all $x$.
True iff $ST = I$; similarly, $TS = I$. Hence the inverse transform is represented by the inverse matrix, $T = S^{-1}$.
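A minimal numpy sketch of the above; the matrix $S$ and the components are made-up examples, not from the notes:

```python
import numpy as np

# Made-up example: columns of S are the primed basis vectors
# expressed in the unprimed basis.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x = np.array([2.0, 3.0])            # components in the unprimed basis

x_prime = np.linalg.solve(S, x)     # x' = S^{-1} x
assert np.allclose(S @ x_prime, x)  # x = S x' recovers the original components

# The inverse transform is represented by the inverse matrix: T = S^{-1}.
T = np.linalg.inv(S)
assert np.allclose(S @ T, np.eye(2)) and np.allclose(T @ S, np.eye(2))
```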
Transforming matrices
If $y = A x$ in the unprimed basis, then $y' = S^{-1} y = S^{-1} A S\, x'$, so
$$A' = S^{-1} A S \quad \text{(a similarity transformation).}$$
Linear Operation
Defined without reference to any basis: $\mathcal{A}(\alpha\mathbf{x} + \beta\mathbf{y}) = \alpha\,\mathcal{A}\mathbf{x} + \beta\,\mathcal{A}\mathbf{y}$. A matrix represents the operator only once a basis is chosen.
Inner Product
- Linear in 2nd argument: $\langle\mathbf{x}|\alpha\mathbf{y}+\beta\mathbf{z}\rangle = \alpha\langle\mathbf{x}|\mathbf{y}\rangle + \beta\langle\mathbf{x}|\mathbf{z}\rangle$
- Anti-linear in 1st argument: $\langle\alpha\mathbf{x}+\beta\mathbf{y}|\mathbf{z}\rangle = \alpha^*\langle\mathbf{x}|\mathbf{z}\rangle + \beta^*\langle\mathbf{y}|\mathbf{z}\rangle$
- Hermitian: $\langle\mathbf{x}|\mathbf{y}\rangle = \langle\mathbf{y}|\mathbf{x}\rangle^*$
- Positive-definite: $\langle\mathbf{x}|\mathbf{x}\rangle \ge 0$, with equality iff $\mathbf{x} = \mathbf{0}$
Inner Product and bases
- Metric coefficients: $G_{ij} = \langle\mathbf{e}_i|\mathbf{e}_j\rangle$, Hermitian ($G^\dagger = G$), so that $\langle\mathbf{x}|\mathbf{y}\rangle = \sum_{i,j} x_i^*\, G_{ij}\, y_j$. For an orthonormal basis, $G_{ij} = \delta_{ij}$.
Cauchy-Schwarz
$$|\langle\mathbf{x}|\mathbf{y}\rangle| \le |\mathbf{x}|\,|\mathbf{y}|$$
Proof by considering $|\mathbf{x}-\lambda\mathbf{y}|^2 \ge 0$, then choosing an appropriate phase of $\lambda$, so that $\lambda\langle\mathbf{x}|\mathbf{y}\rangle$ is real and positive, and an appropriate length of $\lambda$.
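Worked out (the standard argument, writing $t = |\lambda|$):
$$0 \le |\mathbf{x}-\lambda\mathbf{y}|^2 = |\mathbf{x}|^2 - \lambda\langle\mathbf{x}|\mathbf{y}\rangle - \lambda^*\langle\mathbf{y}|\mathbf{x}\rangle + |\lambda|^2|\mathbf{y}|^2.$$
Choosing the phase of $\lambda$ so that $\lambda\langle\mathbf{x}|\mathbf{y}\rangle = t\,|\langle\mathbf{x}|\mathbf{y}\rangle|$ (real and positive, as is its conjugate $\lambda^*\langle\mathbf{y}|\mathbf{x}\rangle$),
$$0 \le |\mathbf{x}|^2 - 2t\,|\langle\mathbf{x}|\mathbf{y}\rangle| + t^2|\mathbf{y}|^2,$$
and choosing the length $t = |\langle\mathbf{x}|\mathbf{y}\rangle|/|\mathbf{y}|^2$ gives $|\langle\mathbf{x}|\mathbf{y}\rangle|^2 \le |\mathbf{x}|^2\,|\mathbf{y}|^2$.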
Hermitian Conjugate
The adjoint $A^\dagger$ of an operator $A$ is defined by $\langle A^\dagger\mathbf{x}|\mathbf{y}\rangle = \langle\mathbf{x}|A\mathbf{y}\rangle$; in an orthonormal basis, $A^\dagger = (A^T)^*$. An operator is self-adjoint if $A^\dagger = A$.
Special square matrices
Type | Relation |
---|---|
Real symmetric | $A^T = A$ ($A$ real) |
Real antisymmetric | $A^T = -A$ ($A$ real) |
Orthogonal | $A^T = A^{-1}$ |
Hermitian | $A^\dagger = A$ |
Anti-Hermitian | $A^\dagger = -A$ |
Unitary | $A^\dagger = A^{-1}$ |
Normal | $AA^\dagger = A^\dagger A$ |
Special results
- If $A$ is Hermitian then $iA$ is anti-Hermitian
- If $A$ is Hermitian then $\exp(iA)$ is unitary (numerical check below)
c.f. real numbers: if $x$ is real then $e^{ix}$ has unit modulus.
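A quick numerical check of both results; the random Hermitian $A$ is an illustrative choice, and `expm` is SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm     # matrix exponential

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2          # made-up Hermitian matrix: A = A^dagger

assert np.allclose((1j * A).conj().T, -(1j * A))   # iA is anti-Hermitian

U = expm(1j * A)                  # exp(iA)
assert np.allclose(U.conj().T @ U, np.eye(3))      # unitary: U^dagger U = I
```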
Eigenvectors
Two things to remember:
- Always find eigenvalues first, by solving the characteristic equation $\det(A - \lambda I) = 0$.
- Roots
- $n$ distinct roots: $n$ linearly independent eigenvectors.
- $m$ repeated roots: there may be anywhere from 1 to $m$ independent eigenvectors corresponding to them, and any linear combination of those is also an eigenvector (see the sketch below).
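A numpy sketch contrasting the two cases; both matrices are made-up examples:

```python
import numpy as np

# Each matrix has the eigenvalue 2 repeated twice.
A = np.diag([2.0, 2.0, 5.0])    # repeated root, TWO independent eigenvectors
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])      # Jordan block: only ONE independent eigenvector

for M in (A, J):
    vals, vecs = np.linalg.eig(M)   # columns of vecs are eigenvectors
    rank = np.linalg.matrix_rank(vecs, tol=1e-8)
    print(f"eigenvalues {np.round(vals, 6)}: {rank} independent eigenvector(s)")
```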
Derivation of the eigenvalue/eigenvector properties of the special matrices (e.g. Hermitian matrices have real eigenvalues and orthogonal eigenvectors; unitary matrices have eigenvalues of unit modulus).
Diagonalisation
Diagonalisation is also a similarity transformation, into the eigenvector basis: if the columns of $S$ are the eigenvectors of $A$, then $S^{-1} A S = \Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$.
- Only diagonalisable if the matrix has $n$ independent eigenvectors.
- Normal matrices can be diagonalised by a unitary transformation; the transformation between orthonormal bases is described by a unitary matrix.
- $\mathrm{Tr}$ and $\det$ are most easily studied in the diagonal form ($\mathrm{Tr}\,A = \sum_i \lambda_i$, $\det A = \prod_i \lambda_i$); both are invariant under similarity transformations (see the sketch below).
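A minimal numpy sketch, using a made-up symmetric (hence normal) matrix; `eigh` returns an orthonormal eigenbasis, so $U$ is unitary:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
vals, U = np.linalg.eigh(A)          # columns of U: orthonormal eigenvectors
D = U.conj().T @ A @ U               # similarity transform into the eigenbasis
assert np.allclose(D, np.diag(vals)) # diagonal, entries are the eigenvalues

# Tr and Det are invariant under the similarity transformation:
assert np.isclose(np.trace(A), vals.sum())        # Tr A = sum of eigenvalues
assert np.isclose(np.linalg.det(A), vals.prod())  # det A = product of eigenvalues
```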
Forms
Quadratic form
Any homogeneous quadratic function is a quadratic form $Q(\mathbf{x}) = \mathbf{x}^T A\,\mathbf{x}$ of a symmetric matrix $A$.
By diagonalisation, we get a form with no cross-terms: $Q = \sum_i \lambda_i\, x_i'^2$ (worked example below).
- The eigenvectors of $A$ define the principal axes of the quadratic form.
- Positive (semi-)definite means all eigenvalues are greater than (or equal to) 0.
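A small worked instance (the particular form is a made-up example):
$$Q = x^2 + 4xy + y^2 = \begin{pmatrix}x & y\end{pmatrix}\begin{pmatrix}1 & 2\\ 2 & 1\end{pmatrix}\begin{pmatrix}x \\ y\end{pmatrix},$$
with eigenvalues $3, -1$ and principal axes along $(1,1)/\sqrt{2}$, $(1,-1)/\sqrt{2}$, so in the rotated coordinates $Q = 3x'^2 - y'^2$. One eigenvalue is negative, so the form is not positive-definite.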
Quadratic surface
The surface $Q(\mathbf{x}) = \text{const}$, i.e. $\lambda_1 x'^2 + \lambda_2 y'^2 + \lambda_3 z'^2 = c$ in principal-axes coordinates:
- Ellipsoids: eigenvalues all of the same sign
- Hyperboloids: eigenvalues of different signs
- $\lambda_1 = \lambda_2 = \lambda_3$: sphere
- $\lambda_1 = \lambda_2 \neq \lambda_3$: surface of revolution about $z'$ (axis of symmetry)
- $\lambda_3 = 0$: conic section translated along $z'$ (a cylinder over the conic)
Hermitian Form
Similar idea for a complex vector space: $H(\mathbf{x}) = \mathbf{x}^\dagger A\,\mathbf{x}$ with $A$ Hermitian; $H$ is always real.
Stationary property
Define the Rayleigh quotient
$$\lambda(\mathbf{x}) = \frac{\mathbf{x}^\dagger A\,\mathbf{x}}{\mathbf{x}^\dagger \mathbf{x}}$$
- $\lambda(\mathbf{x})$ is a scalar
- When $\mathbf{x}$ is an eigenvector of $A$, $\lambda(\mathbf{x})$ is the corresponding eigenvalue.
Rayleigh-Ritz
The eigenvalues of a matrix are the stationary values of the Rayleigh quotient.
Proof by considering $\mathbf{x} = \sum_i c_i\,\mathbf{x}_i$, expanded in an orthonormal eigenvector basis (worked out below).
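Sketch, assuming $A$ Hermitian with orthonormal eigenvectors $\mathbf{x}_i$ and eigenvalues $\lambda_i$:
$$\lambda(\mathbf{x}) = \frac{\mathbf{x}^\dagger A\,\mathbf{x}}{\mathbf{x}^\dagger\mathbf{x}} = \frac{\sum_i \lambda_i\,|c_i|^2}{\sum_i |c_i|^2}, \qquad \mathbf{x} = \sum_i c_i\,\mathbf{x}_i.$$
Then
$$\frac{\partial\lambda}{\partial c_k^*} = \frac{\big(\lambda_k - \lambda(\mathbf{x})\big)\,c_k}{\sum_i |c_i|^2},$$
which vanishes for every $k$ precisely when the only non-zero $c_k$ belong to a single eigenvalue, i.e. when $\mathbf{x}$ is an eigenvector; the stationary value is then that eigenvalue.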
Cartesian Tensor
Use the transformation matrix $L$, with
$$L_{ij} = \mathbf{e}'_i \cdot \mathbf{e}_j, \qquad LL^T = I, \quad \det L = \pm 1.$$
So the transformation law for a vector is $v'_i = L_{ij}\,v_j$ (only this form makes sense for me…)
Definition in terms of transformation laws
Vector: a set of coefficients $v_i$, defined wrt an orthonormal basis $\{\mathbf{e}_i\}$, such that the components $v'_i$ wrt another basis are given by $v'_i = L_{ij}\,v_j$.
Axial vectors: $v'_i = \det(L)\,L_{ij}\,v_j$
Tensors: $T'_{ij\cdots k} = L_{ip}\,L_{jq}\cdots L_{kr}\,T_{pq\cdots r}$
Pseudo-tensors: $T'_{ij\cdots k} = \det(L)\,L_{ip}\,L_{jq}\cdots L_{kr}\,T_{pq\cdots r}$
Examples
Everything is checked by the transformation law above.
- $\delta_{ij}$ is a 2nd-order tensor
- $\epsilon_{ijk}$ is a 3rd-order pseudo-tensor
- Inertia tensor: $I_{ij} = \sum_\alpha m_\alpha \left( r_\alpha^2\,\delta_{ij} - r_{\alpha i}\,r_{\alpha j} \right)$
Derived from the angular momentum definition, as worked out below.
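Using $\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = \mathbf{b}\,(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}\,(\mathbf{a}\cdot\mathbf{b})$ on $\mathbf{L} = \sum_\alpha m_\alpha\,\mathbf{r}_\alpha\times(\boldsymbol{\omega}\times\mathbf{r}_\alpha)$:
$$L_i = \sum_\alpha m_\alpha\,\big(r_\alpha^2\,\omega_i - r_{\alpha i}\,(\mathbf{r}_\alpha\cdot\boldsymbol{\omega})\big) = \sum_\alpha m_\alpha\,\big(r_\alpha^2\,\delta_{ij} - r_{\alpha i}\,r_{\alpha j}\big)\,\omega_j = I_{ij}\,\omega_j.$$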
Operations
- Addition: $S_{ij\cdots} + T_{ij\cdots}$ (same rank)
- Outer product: e.g. $(ST)_{ijkl} = S_{ij}\,T_{kl}$ (ranks add)
- Inner product (contraction): set two indices equal and sum, e.g. $T_{iijk}$ (rank drops by 2)
Note: the cross-product is defined using the epsilon symbol, $(\mathbf{a}\times\mathbf{b})_i = \epsilon_{ijk}\,a_j\,b_k$, so any cross product is a pseudo-tensor (axial vector).
- Symmetry and antisymmetry of a tensor are invariant under change of basis.
- Contraction of symmetric & antisymmetric indices gives zero: $S_{ij}\,A_{ij} = 0$.
Proof by using symmetry properties once and relabel once.
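Explicitly:
$$S_{ij}\,A_{ij} = -\,S_{ji}\,A_{ji} = -\,S_{ij}\,A_{ij} \;\implies\; S_{ij}\,A_{ij} = 0,$$
where the first step uses $S_{ij} = S_{ji}$ and $A_{ij} = -A_{ji}$, and the second relabels the dummy indices $i \leftrightarrow j$.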
Second-order tensors
Two new things
Antisymmetric (3D)
Only three independent components (degrees of freedom), hence can be represented by a vector.
The vector is the dual vector, defined by $\omega_k = \tfrac12\,\epsilon_{ijk}\,T_{ij}$, equivalently $T_{ij} = \epsilon_{ijk}\,\omega_k$.
Symmetric (3D)
Can be further decomposed, uniquely, into a traceless symmetric part and a multiple of the identity:
$$T_{ij} = \Big(T_{ij} - \tfrac13\,T_{kk}\,\delta_{ij}\Big) + \tfrac13\,T_{kk}\,\delta_{ij}$$
Isotropic tensors
Tensors or pseudo-tensors with the same components in all frames.
- Scalars are isotropic
- No non-zero rank-1 isotropic tensor exists
- Rank-2 isotropic tensors are $\alpha\,\delta_{ij}$
Proof by (wlog) considering a rotation about the z-axis and one about the y-axis (sketched below).
- Rank-3 isotropic pseudo-tensors are $\beta\,\epsilon_{ijk}$
- Rank-4 isotropic tensors are $\alpha\,\delta_{ij}\delta_{kl} + \beta\,\delta_{ik}\delta_{jl} + \gamma\,\delta_{il}\delta_{jk}$
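Sketch for rank 2 (the particular $\pi/2$ rotations are one convenient choice). For $L = \begin{pmatrix}0&1&0\\-1&0&0\\0&0&1\end{pmatrix}$ ($\pi/2$ about $z$), isotropy $T_{ij} = L_{ip}L_{jq}T_{pq}$ gives $T_{11} = T_{22}$, $T_{12} = -T_{21}$, and $T_{13} = T_{23} = -T_{13}$, so $T_{13} = T_{23} = 0$ (similarly $T_{31} = T_{32} = 0$). Repeating with the same rotation about $y$ forces $T_{11} = T_{33}$ and $T_{12} = T_{21} = 0$, leaving $T_{ij} = \alpha\,\delta_{ij}$.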
Applications
- Since tensor relations hold in every frame, we can pick the most convenient frame (e.g. the eigenvector frame).
- Integrals
Integral 1
$$J_i = \int_V x_i\,dV \quad \text{over a rotationally invariant volume } V.$$
Relabel the integration variables as primed; transforming, we find $J'_i = L_{ij}\,J_j = J_i$, so $J_i$ is isotropic. But the only isotropic rank-1 tensor is $\mathbf{0}$, hence $J_i = 0$.
Integral 2
$$K_{ij} = \int_V x_i\,x_j\,dV$$
This is also isotropic, so $K_{ij} = \alpha\,\delta_{ij}$, with $\alpha = \tfrac13 K_{ll} = \tfrac13\int_V r^2\,dV$.
Only need to do 1 integral.
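For instance, taking $V$ to be a ball of radius $a$ (an illustrative choice):
$$\alpha = \tfrac13\int_V r^2\,dV = \tfrac13\int_0^a r^2\cdot 4\pi r^2\,dr = \frac{4\pi a^5}{15}, \qquad K_{ij} = \frac{4\pi a^5}{15}\,\delta_{ij}.$$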
Tensor fields
- $\nabla\Phi$, with components $\partial\Phi/\partial x_i$, is a vector in Cartesian coordinates only.
One (maybe) new idea:
- The derivative of a second-order tensor is a rank-3 tensor field: $T_{ijk} = \partial T_{ij}/\partial x_k$.