Review
A linear vector space V is a set of elements, {Vi}, which may be added and multiplied by scalars {αi} in such a way that the results of these operations again belong to V, and the usual rules hold: addition is commutative and associative, there exist a null vector and an additive inverse for every vector, and multiplication by scalars is distributive and associative.
The domain of allowed scalars is called the field F over which V is defined. (Examples: F consists of all real numbers, or F consists of all complex numbers.)
Examples of vector spaces: the ordinary vectors in three-dimensional space, and the square-integrable functions forming L2.
A set of vectors {V1, V2, V3, ...} is linearly independent (LI) if there exists no linear relation of the form
∑iαiVi = 0,
except for the trivial one with all αi = 0.
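As a numerical sketch of this definition (using numpy; the helper name linearly_independent is just for this example), a finite set of vectors is LI exactly when the matrix they form has full rank:

```python
import numpy as np

# Stack candidate vectors as rows; they are linearly independent
# exactly when the matrix rank equals the number of vectors.
def linearly_independent(vectors):
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

# i, j, k in three dimensions are LI:
print(linearly_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))

# (1,0,0), (0,1,0), (1,1,0) satisfy a nontrivial linear relation, so not LI:
print(linearly_independent([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))
```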
A vector space is n-dimensional if it
admits at most n LI vectors. The space of ordinary vectors in three-dimensional space is
3-dimensional. The space L2 is an infinite-dimensional vector space.
Given a set of n LI vectors in Vn, any other vector in Vn may be written as a
linear combination of these. The vectors i, j, k are one example of a set of 3 LI vectors in 3 dimensions.
One can also choose such a set for an infinite-dimensional vector space, whether its dimension is denumerably or non-denumerably infinite. Any such set is called a basis that spans V, and the expansion coefficients are called the components of the vector in this basis.
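A sketch of such an expansion in a finite-dimensional case (the basis below is an arbitrary non-orthogonal example): the components follow from solving a linear system.

```python
import numpy as np

# Any vector in a 3-d space can be expanded in a basis of 3 LI vectors.
# For a general (not necessarily orthonormal) basis, the components
# solve the linear system  B @ c = v, with the basis vectors as columns.
basis = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [0, 0, 1]], dtype=float).T   # columns are the basis vectors
v = np.array([2.0, 3.0, 4.0])

c = np.linalg.solve(basis, v)
# Reconstruction: v = sum_i c_i * (basis vector i)
print(c, np.allclose(basis @ c, v))
```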
Assume {ui(r), ui ∈ L2}
forms a basis of L2. Then every vector ψ
in L2
may be written as
ψ(r) = ∑iciui(r),
the ci being the components of ψ(r)
in this basis.
If all vectors are expanded in a given basis, then operations on vectors reduce to operations on their components: vectors are added by adding corresponding components, and a vector is multiplied by a scalar by multiplying each component by that scalar.
The inner product is a scalar function <Vi|Vj> of two vectors satisfying the following rules:
(i) <Vi|Vi> ≥ 0, with equality only for the null vector,
(ii) <Vi|Vj> = <Vj|Vi>*,
(iii) <Vi|αVj + βVk> = α<Vi|Vj> + β<Vi|Vk>.
Rules (ii) and (iii) combine to give <αVi + βVj|Vk>
= α*<Vi|Vk> + β*<Vj|Vk>.
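This antilinearity in the first argument can be verified numerically (a sketch using numpy's vdot, which conjugates its first argument and therefore implements the inner product for complex column vectors):

```python
import numpy as np

# np.vdot(V, W) = sum_k conj(V_k) W_k, matching rule (ii): <Vi|Vj> = <Vj|Vi>*.
rng = np.random.default_rng(0)
Vi, Vj, Vk = (rng.normal(size=3) + 1j * rng.normal(size=3) for _ in range(3))
alpha, beta = 2.0 + 1.0j, -0.5 + 3.0j

# <alpha Vi + beta Vj | Vk> versus alpha* <Vi|Vk> + beta* <Vj|Vk>
lhs = np.vdot(alpha * Vi + beta * Vj, Vk)
rhs = np.conj(alpha) * np.vdot(Vi, Vk) + np.conj(beta) * np.vdot(Vj, Vk)
print(np.allclose(lhs, rhs))
```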
A vector space with an inner product is called an inner
product space. The inner product in L2 is defined by
<Φ|ψ> = ∫Φ*(r) ψ(r) d3r.
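A one-dimensional sketch of this inner product (x in place of r, with the integral approximated by a Riemann sum on a grid):

```python
import numpy as np

# 1-d analogue of <Phi|psi> = ∫ Phi*(x) psi(x) dx on a grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
phi = np.exp(-x**2 / 2.0)          # Gaussian
psi = x * np.exp(-x**2 / 2.0)      # odd function

inner = np.sum(np.conj(phi) * psi) * dx
norm_phi_sq = np.sum(np.abs(phi)**2) * dx

print(abs(inner) < 1e-10)                       # even * odd integrand -> 0
print(np.isclose(norm_phi_sq, np.sqrt(np.pi)))  # ∫ exp(-x^2) dx = sqrt(pi)
```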
The norm of a vector Vi is defined by
|Vi| = <Vi|Vi>1/2.
A unit
vector has norm 1.
Two vectors are orthogonal if
their inner product vanishes. A set of vectors {Vi} is called
orthonormal
if <Vi|Vj> = δij. Assume the vectors {ui(r)}
are orthonormal and form a basis for L2. Then
<ui|uj> = ∫ui*(r) uj(r)
d3r = δij.
ψ(r) = ∑iciui(r),
<uj|ψ> = ∫uj*(r) ψ(r) d3r
= ∑ici ∫uj*(r) ui(r)
d3r = ∑ici δji = cj.
The component cj is therefore equal to the scalar product of uj(r)
and ψ(r).
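A numerical sketch of this component formula, assuming the orthonormal Fourier modes un(x) = exp(inx)/(2π)1/2 on [0, 2π) as the basis: the coefficients of a trial ψ are recovered by integrating against the basis functions.

```python
import numpy as np

# Orthonormal basis u_n(x) = exp(i n x) / sqrt(2 pi) on [0, 2 pi).
# If psi = sum_n c_n u_n, then c_n = <u_n|psi> = ∫ u_n*(x) psi(x) dx.
x = np.linspace(0.0, 2.0 * np.pi, 20000, endpoint=False)
dx = x[1] - x[0]
u = lambda n: np.exp(1j * n * x) / np.sqrt(2.0 * np.pi)

c_true = {1: 0.5, 2: -1.25j, 3: 2.0}          # components chosen by hand
psi = sum(c * u(n) for n, c in c_true.items())

for n, c in c_true.items():
    c_rec = np.sum(np.conj(u(n)) * psi) * dx   # numerical <u_n|psi>
    print(n, np.allclose(c_rec, c))
```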
Let ψ(r) = ∑iciui(r), Φ(r)
= ∑ibiui(r), then <Φ|ψ> = ∑ibi*ci
and <ψ|ψ> = ∑i|ci|2.
The norm can be expressed in terms of the components.
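These component formulas can be checked numerically (a sketch: an orthonormal basis is generated as the columns of a unitary matrix obtained from a QR decomposition):

```python
import numpy as np

# With an orthonormal basis (columns of Q), <Phi|psi> = sum_i b_i* c_i
# and <psi|psi> = sum_i |c_i|^2.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)                              # columns: orthonormal basis

b = rng.normal(size=4) + 1j * rng.normal(size=4)    # components of Phi
c = rng.normal(size=4) + 1j * rng.normal(size=4)    # components of psi
Phi, psi = Q @ b, Q @ c

print(np.allclose(np.vdot(Phi, psi), np.vdot(b, c)))           # sum_i b_i* c_i
print(np.allclose(np.vdot(psi, psi), np.sum(np.abs(c)**2)))    # sum_i |c_i|^2
```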
The inner product obeys the
Schwarz inequality
|<Vi|Vj>|2
≤ |Vi|2|Vj|2.
The norm obeys the triangle inequality
|Vi + Vj| ≤ |Vi| +|Vj|.
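Both inequalities can be spot-checked numerically for random complex vectors (a sketch; any inner-product space would do):

```python
import numpy as np

rng = np.random.default_rng(2)
Vi = rng.normal(size=5) + 1j * rng.normal(size=5)
Vj = rng.normal(size=5) + 1j * rng.normal(size=5)
norm = lambda V: np.sqrt(np.vdot(V, V).real)

# Schwarz inequality: |<Vi|Vj>|^2 <= |Vi|^2 |Vj|^2
print(abs(np.vdot(Vi, Vj))**2 <= norm(Vi)**2 * norm(Vj)**2)
# Triangle inequality: |Vi + Vj| <= |Vi| + |Vj|
print(norm(Vi + Vj) <= norm(Vi) + norm(Vj))
```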
Problem:
Find the Fourier transform of the δ function δ(x - x0).
Solution:
F(k) = (2π)-1/2∫δ(x - x0) exp(-ikx) dx = (2π)-1/2exp(-ikx0).
The Fourier transform of δ(x - x0) has the same magnitude (2π)-1/2 for every k; only its phase depends on k.
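A numerical check of this result (a sketch: δ(x - x0) is modeled by a narrow normalized Gaussian, and the integral by a Riemann sum):

```python
import numpy as np

# delta(x - x0) approximated by a narrow normalized Gaussian; its
# Fourier transform should approach (2 pi)^(-1/2) exp(-i k x0).
x0, sigma = 1.5, 1e-3
x = np.linspace(x0 - 0.1, x0 + 0.1, 200001)
dx = x[1] - x[0]
f = np.exp(-(x - x0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

for k in (0.0, 1.0, 3.0):
    Fk = np.sum(f * np.exp(-1j * k * x)) * dx / np.sqrt(2 * np.pi)
    expected = np.exp(-1j * k * x0) / np.sqrt(2 * np.pi)
    print(k, np.allclose(Fk, expected, atol=1e-5))
```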
Subspaces
Given a vector space V, a subset of its elements that form a vector space among
themselves is called a subspace of V.
Inner-product spaces are vector spaces for which an additional operation is defined, namely taking the inner product of two vectors. This operation associates with each pair of vectors a scalar, i.e. a number, not a vector. The operation must also obey certain rules, but again, as long as it does obey the rules it can be defined quite differently in different vector spaces. The vector space of ordinary 3-d vectors is an inner-product space; the inner product is the dot product.
The vector space of all possible states in QM
is not 3-dimensional, but infinite-dimensional.
It is called a Hilbert space and it is an inner-product space.
In Dirac notation the inner product of a vector |ψ> with a vector |Φ> is denoted by the symbol <ψ|Φ>.
This symbol denotes a number, not a vector.
The inner product is quite different from ordinary multiplication; for
example, <Φ|ψ> is not in general
equal to <ψ|Φ>, but rather <Φ|ψ> = <ψ|Φ>*,
in accordance with the rules for an inner-product space.
In Dirac notation kets represent the vectors.
To every ket corresponds exactly one bra, and vice versa;
the correspondence is one to one.
If |ψ> is a ket, the corresponding bra is <ψ|.
If |x> is a ket, the corresponding bra is <x|.
The vectors in the Hilbert space can be represented in
various representations, i.e. we can choose different bases, and give their
components along the basis vectors. If
we choose the coordinate representation, the basis is the set of all vectors {|x>}, and the component of a vector |ψ> along a
vector |x> is given by the inner product <x|ψ>
= ψ(x).
If we evaluate ψ(x) for all |x> we get the
wave function. Because we want to
interpret the absolute square of the wave function as a probability density, we require
that the wave function can be normalized and that the integral of the absolute square of
the normalized wave function over all space equals 1.
The probability that we find the system somewhere in space is 1. We therefore require that the wave function is square-integrable.
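A minimal sketch of this normalization requirement in one dimension (the Gaussian profile is just an example):

```python
import numpy as np

# Normalizing a wave function: scale psi so that ∫ |psi(x)|^2 dx = 1.
x = np.linspace(-20.0, 20.0, 8001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 4.0)                          # unnormalized Gaussian

psi_n = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)
print(np.isclose(np.sum(np.abs(psi_n)**2) * dx, 1.0))
```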
We therefore say that our Hilbert space is equivalent to the space of
square-integrable functions.
Some functions, such as ψ(x) = cos(kx) or ψ(x) = δ(x), are not square-integrable; the integral of their absolute square over all space yields infinity. They therefore cannot represent real physical systems. But they are mathematically convenient, so we pretend they belong to the Hilbert space and treat them accordingly.

The coordinate representation of the ket |x> is a δ function. This function is not square-integrable, so |x> does not really belong to the Hilbert space. (|x> represents a system whose position is precisely known, and the uncertainty principle says that we cannot have this.) The bra <x| does belong to the dual space, because we can form <x|ψ> according to the rules of the inner product; this integral is finite and yields a number. Therefore every good ket has a corresponding good bra, but not every good bra has a corresponding good ket. We generally do not worry about this; we just pretend that |x> is a good ket in the Hilbert space.

The ket |p> represents a system with precisely defined momentum, which is also forbidden by the uncertainty principle. Its wave function <x|p> = (2πħ)-1/2exp(ipx/ħ) is not square-integrable, and strictly speaking |p> does not belong to the Hilbert space. Again, we generally ignore this and pretend that it belongs. The mathematical justifications are tedious, but they can be given.
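A quick numerical illustration of the failure of square-integrability (a sketch using cos(kx) with an arbitrary k): the integral of |cos(kx)|2 over [-L, L] grows without bound as L increases.

```python
import numpy as np

# ∫ |cos(kx)|^2 dx over [-L, L] equals L + sin(2kL)/(2k),
# which grows like L instead of converging as L -> infinity.
k = 2.0
norms_sq = {}
for L in (10.0, 100.0, 1000.0):
    x = np.linspace(-L, L, 400001)
    dx = x[1] - x[0]
    norms_sq[L] = np.sum(np.cos(k * x)**2) * dx
    print(L, norms_sq[L])   # roughly L each time; no finite limit
```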