A **linear vector space** V is a set of
elements, {V_{i}}, which may be added and multiplied by scalars {α_{i}} in such a way that

- the operation yields only elements of V (closure);
- addition and scalar multiplication obey the following rules:
- i) V_{i} + V_{j} = V_{j} + V_{i} (commutativity);
- ii) V_{i} + (V_{j} + V_{k}) = (V_{i} + V_{j}) + V_{k} (associativity);
- iii) there exists a null vector, 0, in V such that 0 + V_{i} = V_{i} + 0 = V_{i};
- iv) for each vector V_{i} there exists an inverse (-V_{i}) in V such that V_{i} + (-V_{i}) = 0;
- v) α(V_{i} + V_{j}) = αV_{i} + αV_{j};
- vi) (α + β)V_{i} = αV_{i} + βV_{i};
- vii) α(βV_{i}) = (αβ)V_{i}.
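These rules can be spot-checked numerically for ordinary vectors in three dimensions with real scalars. A minimal sketch (numpy assumed; the vectors and scalars are arbitrary illustrative choices):

```python
import numpy as np

# Spot-check of rules i)-vii) for R^3 with real scalars.
# Vi, Vj, Vk and alpha, beta are arbitrary illustrative choices.
rng = np.random.default_rng(0)
Vi, Vj, Vk = rng.standard_normal((3, 3))
alpha, beta = 2.0, -0.5
zero = np.zeros(3)

assert np.allclose(Vi + Vj, Vj + Vi)                        # i)   commutativity
assert np.allclose(Vi + (Vj + Vk), (Vi + Vj) + Vk)          # ii)  associativity
assert np.allclose(zero + Vi, Vi)                           # iii) null vector
assert np.allclose(Vi + (-Vi), zero)                        # iv)  inverse
assert np.allclose(alpha*(Vi + Vj), alpha*Vi + alpha*Vj)    # v)
assert np.allclose((alpha + beta)*Vi, alpha*Vi + beta*Vi)   # vi)
assert np.allclose(alpha*(beta*Vi), (alpha*beta)*Vi)        # vii)
print("all seven rules hold")
```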

The domain of allowed scalars is called the field F over which V is defined. (Examples: F consists of all real numbers, or F consists of all complex numbers.)

**Examples:**

- Ordinary vectors in three-dimensional space;
- The set L^{2} of square integrable functions ψ(**r**,t) defined by

∫_{all space} |ψ(**r**,t)|^{2} d^{3}r = finite.

A set of vectors {V_{1}, V_{2}, V_{3}, ...} is **linearly independent** (LI) if there exists no
linear relation of the form ∑_{i=1}^{n}α_{i}V_{i} = 0,
except for the trivial one with all α_{i} = 0.

A vector space is **n-dimensional** if it
admits at most n LI vectors. The space of ordinary vectors in three-dimensional space is
3-dimensional. The space L^{2} is an infinite-dimensional vector space.
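Linear independence can be tested numerically: n vectors are LI exactly when the matrix having them as rows has rank n. A sketch (numpy assumed; the three vectors are illustrative):

```python
import numpy as np

# Three vectors in R^3; the third is the sum of the first two,
# so the set is linearly dependent and the rank is only 2.
vecs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [1.0, 1.0, 0.0]])
rank = np.linalg.matrix_rank(vecs)
print(rank)   # 2: at most 2 of these three vectors are LI
```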

Given a set of n LI vectors in V^{n}, any other vector in V^{n} may be written as a
linear combination of these. The vectors **i**, **j**, **k** are one example of a set of 3 LI vectors in 3 dimensions.
A corresponding set, with a denumerably or non-denumerably infinite number of members, can
always be chosen for an infinite-dimensional
vector space. Any such set is called a **basis**
that spans V. The expansion coefficients are called the components
of a vector in this basis.

Assume {u_{i}(**r**), u_{i} ∈ L^{2}} forms a basis of L^{2}. Then every vector ψ in L^{2} may be written as

ψ(**r**) = ∑_{i}c_{i}u_{i}(**r**),

the c_{i} being the components of ψ(**r**) in this basis.

If all vectors are expanded in a given basis then

- to add vectors, add their components;
- to multiply a vector by α, multiply each component by α.
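The two rules above amount to elementwise arithmetic on the component arrays. A minimal sketch (numpy assumed; the component values are arbitrary illustrative choices):

```python
import numpy as np

# Component arithmetic in a fixed basis (i, j, k);
# A and B hold illustrative component triples.
A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])
alpha = 2.0

print(A + B)      # [5. 7. 9.]   add vectors: add their components
print(alpha * A)  # [2. 4. 6.]   multiply by alpha: scale each component
```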

The **inner product** is a scalar function of two vectors satisfying the following
rules:

- i) <V_{i}|V_{i}> ≥ 0;
- ii) <V_{i}|V_{j}> = <V_{j}|V_{i}>^{*};
- iii) <V_{i}|αV_{j} + βV_{k}> = α<V_{i}|V_{j}> + β<V_{i}|V_{k}>.

Rules ii) and iii) combine to give <αV_{i} + βV_{j}|V_{k}>
= α^{*}<V_{i}|V_{k}> + β^{*}<V_{j}|V_{k}>.

A vector space with an inner product is called an **inner
product space**. The inner product in L^{2} is defined by

<Φ|ψ> = ∫Φ*(**r**) ψ(**r**) d^{3}r.
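In one dimension this integral can be approximated on a grid. A sketch with two illustrative Gaussian wave functions (numpy assumed; a Riemann sum stands in for the exact integral):

```python
import numpy as np

# <Phi|psi> = ∫ Phi*(x) psi(x) dx, approximated by a Riemann sum.
# phi and psi are illustrative normalized Gaussians (psi carries a phase).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
phi = np.pi**-0.25 * np.exp(-x**2 / 2)
psi = np.pi**-0.25 * np.exp(-x**2 / 2) * np.exp(1j * x)

inner = np.sum(np.conj(phi) * psi) * dx   # <phi|psi>
norm2 = np.sum(np.abs(phi)**2) * dx       # <phi|phi>
print(norm2)    # ≈ 1: phi is normalized
print(inner)    # ≈ exp(-1/4), the exact analytic value
```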

The **norm** of a vector V_{i} is defined by
|V_{i}| = <V_{i}|V_{i}>^{1/2}.
A unit vector has norm 1.

Two vectors are **orthogonal** if
their inner product vanishes. A set of vectors {V_{i}} is called
**orthonormal**
if <V_{i}|V_{j}> = δ_{ij}. Assume the vectors {u_{i}(**r**)}
are orthonormal and form a basis for L^{2}. Then

<u_{i}|u_{j}> = ∫u_{i}*(**r**) u_{j}(**r**)
d^{3}r = δ_{ij}.

Expanding ψ(**r**) = ∑_{i}c_{i}u_{i}(**r**) and taking the inner product with u_{j} gives

<u_{j}|ψ> = ∫u_{j}*(**r**) ψ(**r**) d^{3}r
= ∑_{i}c_{i }∫u_{j}*(**r**) u_{i}(**r**)
d^{3}r = ∑_{i}c_{i }δ_{ji} = c_{j}.

The component c_{j} is therefore equal to the scalar product of u_{j}(**r**)
and ψ(**r**).

Let ψ(**r**) = ∑_{i}c_{i}u_{i}(**r**), Φ(**r**)
= ∑_{i}b_{i}u_{i}(**r**), then <Φ|ψ> = ∑_{i}b_{i}*c_{i}
and <ψ|ψ> = ∑_{i}|c_{i}|^{2}.

The norm can be expressed in terms of the components.
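For instance, on the interval [0, 2π) the functions u_{n}(x) = (2π)^{-1/2} exp(inx) are orthonormal, and the relation <ψ|ψ> = ∑_{i}|c_{i}|^{2} can be checked on a grid. A sketch (numpy assumed; the test function ψ is an illustrative choice):

```python
import numpy as np

# Expand psi on [0, 2π) in the orthonormal basis u_n(x) = exp(inx)/sqrt(2π)
# and verify <psi|psi> = Σ |c_n|^2; psi is an illustrative test function.
N = 2048
x = np.linspace(0.0, 2*np.pi, N, endpoint=False)
dx = x[1] - x[0]
psi = np.cos(x) + 0.5*np.sin(3*x)

ns = np.arange(-5, 6)
u = np.exp(1j*np.outer(ns, x)) / np.sqrt(2*np.pi)   # rows are u_n(x)
c = (np.conj(u) @ psi) * dx                         # c_n = <u_n|psi>

print(np.sum(np.abs(c)**2))           # Σ|c_n|^2
print(np.sum(np.abs(psi)**2) * dx)    # <psi|psi>; the two agree (≈ 5π/4)
```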

The inner product obeys the **Schwarz inequality**
|<V_{i}|V_{j}>|^{2} ≤ |V_{i}|^{2}|V_{j}|^{2}.
The norm obeys the **triangle inequality**
|V_{i} + V_{j}| ≤ |V_{i}| + |V_{j}|.
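Both inequalities can be spot-checked numerically on random complex vectors (an illustrative sketch, numpy assumed, not a proof):

```python
import numpy as np

# Spot-check of the Schwarz and triangle inequalities for random
# complex vectors in C^5.
rng = np.random.default_rng(1)
for _ in range(100):
    Vi = rng.standard_normal(5) + 1j*rng.standard_normal(5)
    Vj = rng.standard_normal(5) + 1j*rng.standard_normal(5)
    ip = np.vdot(Vi, Vj)   # <Vi|Vj>; vdot conjugates its first argument
    ni, nj = np.linalg.norm(Vi), np.linalg.norm(Vj)
    assert abs(ip)**2 <= ni**2 * nj**2 + 1e-12         # Schwarz
    assert np.linalg.norm(Vi + Vj) <= ni + nj + 1e-12  # triangle
print("both inequalities hold in every trial")
```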

**Examples:**

- The set of functions v_{p}(x) = (2πħ)^{-1/2} exp(ipx/ħ), labeled by the continuous index p, may be considered a basis even though the v_{p} themselves do not belong to L^{2}_{x}. We write

Ψ(x) = (2πħ)^{-1/2}∫ Ψ(p) exp(ipx/ħ) dp,

or

Ψ(x) = ∫ Ψ(p) v_{p}(x) dp.

Ψ(x) is an element of L^{2}_{x}. The set {v_{p}(x)} of all plane waves with different values of p = ħk spans L^{2}_{x}. Here p is a continuous index between -∞ and +∞ which labels the various functions in the set. Every function in L^{2}_{x} can be expanded in one and only one way in terms of the v_{p}(x); Ψ(p) corresponds to the expansion coefficient c_{i} in a discretely labeled basis.

Ψ(p) = ∫Ψ(x) v_{p}*(x) dx, <ψ|ψ> = ∫|Ψ(p)|^{2}dp.

The set {v_{p}} is "orthonormalized in the Dirac sense".

<v_{p}|v_{p'}> = ∫v_{p}*(x) v_{p'}(x) dx = (2πħ)^{-1}∫ exp(i(p' - p)x/ħ) dx = δ(p - p').
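A grid version of this expansion can be checked numerically: compute Ψ(p) for an illustrative Gaussian Ψ(x) and compare the norm in the two representations (numpy assumed, ħ set to 1, integrals replaced by Riemann sums):

```python
import numpy as np

# Psi(p) = (2πħ)^{-1/2} ∫ exp(-ipx/ħ) Psi(x) dx for a Gaussian Psi(x),
# with ħ = 1; the norm is the same in the x and p representations.
hbar = 1.0
x = np.linspace(-20.0, 20.0, 2001)
dx = x[1] - x[0]
Psi_x = np.pi**-0.25 * np.exp(-x**2 / 2)

p = np.linspace(-10.0, 10.0, 1001)
dp = p[1] - p[0]
kernel = np.exp(-1j*np.outer(p, x)/hbar) / np.sqrt(2*np.pi*hbar)
Psi_p = (kernel @ Psi_x) * dx

print(np.sum(np.abs(Psi_x)**2) * dx)   # <psi|psi> in x space ≈ 1
print(np.sum(np.abs(Psi_p)**2) * dp)   # the same norm in p space ≈ 1
```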

**Properties of the Dirac δ function:**

(2π)^{-1}∫ exp(ik(x - x_{0})) dk = δ(x - x_{0}).

If we define the δ function through the relationship

∫_{-∞}^{∞}f(x) δ(x) dx = f(0), or ∫_{-∞}^{∞}f(x) δ(x - x_{0}) dx = f(x_{0}),

then δ_{x0} = δ(x - x_{0}) may be considered a basis not belonging to L^{2}_{x}, labeled by the continuous index x_{0}, which spans L^{2}_{x}.

Ψ(x) = ∫_{-∞}^{∞}Ψ(x') δ(x - x') dx', where the expansion coefficient Ψ(x') is given by

Ψ(x') = ∫_{-∞}^{∞}δ(x' - x) Ψ(x) dx, <ψ|ψ> = ∫|Ψ(x')|^{2}dx'.

The basis {δ(x-x_{0}) } is "orthonormalized in the Dirac sense".

**Problem:**

Find the Fourier transform of the δ function δ(x- x_{0}).

Solution:

- Concepts:

The Fourier transform

- Reasoning:

We are asked to find the Fourier transform of δ(x - x_{0}).

- Details of the calculation:

δ_{xo}(p) = (2πħ)^{-1/2}∫_{-∞}^{∞}exp(-ipx/ħ) δ(x- x_{0}) dx = (2πħ)^{-1/2}exp(-ipx_{0}/ħ)

by the definition of the Fourier transform. In particular

δ_{xo}(0) = (2πħ)^{-1/2}.

The inverse Fourier transform then yields

δ(x - x_{0}) = (2πħ)^{-1/2}∫_{-∞}^{∞}exp(ipx/ħ) δ_{xo}(p) dp = (2πħ)^{-1}∫_{-∞}^{∞}exp(ipx/ħ) exp(-ipx_{0}/ħ) dp

= (2πħ)^{-1}∫_{-∞}^{∞}exp(ip(x - x_{0})/ħ) dp = (2π)^{-1}∫_{-∞}^{∞}exp(ik(x - x_{0})) dk.

This is an equivalent definition of the Dirac δ function.
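The result can be checked numerically by replacing δ(x - x_{0}) with a narrow normalized Gaussian; its Fourier transform approaches the pure phase (2πħ)^{-1/2}exp(-ipx_{0}/ħ) as the width shrinks. A sketch (numpy assumed, ħ = 1, x_{0} and p illustrative):

```python
import numpy as np

# Approximate δ(x - x0) by a normalized Gaussian of width sigma and
# compare its Fourier transform at one value of p with the exact phase.
hbar = 1.0
x0, sigma, p = 0.7, 1e-3, 2.0
x = np.linspace(x0 - 0.1, x0 + 0.1, 20001)
dx = x[1] - x[0]
delta_approx = np.exp(-(x - x0)**2 / (2*sigma**2)) / (sigma*np.sqrt(2*np.pi))

ft = np.sum(np.exp(-1j*p*x/hbar) * delta_approx) * dx / np.sqrt(2*np.pi*hbar)
expected = np.exp(-1j*p*x0/hbar) / np.sqrt(2*np.pi*hbar)
print(abs(ft - expected))   # tiny: the transform is the pure phase above
```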

Examples:

- An ordinary vector **A** = a_{x}**i** + a_{y}**j** + a_{z}**k** = a_{x'}**i**' + a_{y'}**j**' + a_{z'}**k**' in three-dimensional space may be represented by the components (a_{x}, a_{y}, a_{z}) or (a_{x'}, a_{y'}, a_{z'}), depending on the choice of basis (**i**, **j**, **k**) or (**i**', **j**', **k**'). But if we write **A**, no particular basis is implied.
- The quantum state of any physical system is characterized by a state vector, belonging
to a space E, which is the state space of the system.
If Ψ(**r**) ∈ L^{2}, then |Ψ> ∈ E. We may consider Ψ(**r**) to be one specific representation of |Ψ>, namely the set of components in a particular basis δ(**r**), with **r** playing the role of an index.


A **linear functional** ζ associates a complex number with every ket: if |ψ> ∈ V, then ζ(|ψ>) ∈ F, with

ζ(λ_{1}|ψ_{1}> + λ_{2}|ψ_{2}>) = λ_{1}ζ(|ψ_{1}>) + λ_{2}ζ(|ψ_{2}>).

The set of all linear functionals defined on V forms a vector space, which is called the dual space of V, denoted by V^{*}. Through the inner product, every ket |Φ> defines a linear functional, written <Φ|, via ζ_{Φ}(|ψ>) = <Φ|ψ>; <Φ| is called the bra corresponding to |Φ>.

Take the ket |Φ> = λ|ψ>. Then for any |χ>,

<Φ|χ> = <χ|Φ>^{*} = (λ<χ|ψ>)^{*} = λ^{*}<ψ|χ>.

The bra corresponding to |Φ> is <Φ| = λ^{*}<ψ|.

We therefore have that the bra corresponding to λ|ψ> = |λψ> is <λψ| = λ^{*}<ψ|.

Kets and bras are therefore related antilinearly: the correspondence takes λ|ψ> to λ^{*}<ψ|.

**Subspaces**

Given a vector space V, a subset of its elements that form a vector space among
themselves is called a subspace of V.

- The vector space spanned by {**i**, **j**} is a subspace of the space of ordinary vectors in 3 dimensions.
- The space of sufficiently regular functions ψ(**r**) in L^{2} is a subspace of L^{2} called L^{2}_{r}.

In quantum mechanics, it is postulated that all possible states of a system form a vector space, i.e. they can be manipulated with two operations called addition and multiplication, which obey the rules for addition and multiplication in a vector space. The operations are obviously different from the operations of adding and multiplying ordinary numbers.

**Inner-product spaces**
are vector spaces for which an additional operation is defined, namely taking
the inner product of two vectors. This
operation associates with each pair of vectors a scalar, i.e. a number, not a
vector. The operation also must
obey certain rules, but again, as long as it does obey the rules it can be
defined quite differently in different vector spaces.
The vector space of ordinary 3-d vectors is an inner-product space; the
inner product is the dot product.

The vector space of all possible states in QM
is not 3-dimensional, but infinite-dimensional.
It is called a Hilbert space and it is an inner-product space.
In Dirac notation the inner product of a vector |ψ> with a vector |Φ> is denoted by the symbol <ψ|Φ>.
This symbol denotes a number, not a vector.
The inner product is quite different from ordinary multiplication; for
example, <Φ|ψ> is in general not equal to <ψ|Φ> but to its complex conjugate <ψ|Φ>^{*}. Still,
the inner product satisfies the rules for an inner-product space.

In Dirac notation kets represent the vectors.
To every ket corresponds exactly one bra.
There is a one-to-one correspondence.
|ψ> is a ket, the corresponding bra is <ψ|.
If |x> is a ket, the corresponding bra is <x|.

The vectors in the Hilbert space can be represented in
various representations, i.e. we can choose different bases, and give their
components along the basis vectors. If
we choose coordinate representation, the basis is the set of all vectors {|x>} and the component of a vector |ψ> along a
vector |x> is given by the inner product <x|ψ>
= ψ(x).
If we evaluate ψ(x) for all |x> we get the
wave function. Because we want to
interpret the square of the wave function as a probability density, we require
that the wave function can be normalized and that if we integrate the square of
the normalized wave function over all space we get 1.
The probability that we find the system somewhere in space is 1. We require that the wave function is square-integrable.
We therefore say that our Hilbert space is equivalent to the space of
square-integrable functions.

Some functions such as ψ(x) = cos(kx)
or ψ(x) = δ(x) are not
square-integrable, the integral over all space always yields infinity.
They therefore cannot represent real physical systems.
But they are mathematically convenient, and we pretend they belong and
treat them accordingly. The
coordinate representation of the ket |x> is δ(x),
the function is not square-integrable and |x> does not really belong to the
Hilbert space. (|x> represents a
system whose position is precisely known, and the uncertainty principle says
that we cannot have this.) The bra <x| belongs to the dual space, because we can form <x|ψ> according to the rules of the inner product. (This integral is finite and yields
a number). Therefore every good ket
has a corresponding good bra, but not every good bra has a corresponding good
ket. We generally do not worry
about this. We just pretend that |x> is a good ket in the Hilbert
space. The ket |p> represents a
system with precisely defined momentum, which is also forbidden by the
uncertainty principle. Its wave
function <x|p> = (2πħ)^{-1/2}exp(ipx/ħ) is
not square-integrable and strictly speaking |p> does not belong to the
Hilbert space. Again, we generally ignore this and pretend that it belongs.
Mathematical justifications are tedious, but can be made.