User talk:Paul Wormer/scratchbook
Complex numbers in physics
Classical physics
Classical physics consists of classical mechanics, electromagnetic theory, and phenomenological thermodynamics. One can add Einstein's special and general theories of relativity to this list, although these theories, being formulated in the 20th century, are usually not referred to as "classical". In these four branches of physics the basic quantities, and the equations governing their behavior, are real.
Classical mechanics has three different, but equivalent, formulations. The oldest, due to Newton, deals with masses and position vectors of particles, which are real, as is time t. The first and second time derivatives of the position vectors enter Newton's equations and these are obviously real, too. The same is true for Lagrange's formulation of classical mechanics in terms of position vectors and velocities of particles and for Hamilton's formulation in terms of momenta and positions.
The Maxwell equations, which constitute the basis of electromagnetic theory, are formulated in terms of real vector operators (gradient, divergence, and curl) acting on real electric and magnetic fields.
Thermodynamics is concerned with concepts such as internal energy, entropy, and work. Again, these properties are real.
The special theory of relativity is formulated in Minkowski space. Although this space is sometimes described as a 3-dimensional Euclidean space to which the axis ict (i is the imaginary unit, c is the speed of light, t is time) is added as a fourth dimension, the role of i is non-essential. The imaginary unit is merely a convenient device for computing the indefinite, real, inner product that in Lorentz coordinates has the metric

⟨x, x⟩ = x₁² + x₂² + x₃² − c²t²,
which obviously is real. In other words, Minkowski space is a space over the real field ℝ. The general theory of relativity is formulated over real differentiable manifolds that are locally Lorentzian. Further, the Einstein field equations contain mass distributions that are real.
So, although the classical branches of physics do not need complex numbers, this does not mean that these numbers cannot be useful. A very important mathematical technique, especially for those branches of physics where there is flow (of electricity, heat, or mass), is Fourier analysis. The Fourier series is most conveniently formulated in complex form. Although it could be formulated in real terms (as an expansion in sines and cosines), this would be cumbersome: the usual trigonometric formulas for products of sines and cosines are much more unwieldy than the corresponding multiplication of complex exponentials. Electromagnetic theory in particular makes heavy use of complex numbers, but it must be remembered that the final results, which are to be compared with observable quantities, are real.
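As an illustration (not part of the original text), the convenience of the complex form can be checked numerically: the coefficients of the complex Fourier series fall out of a single one-line integral. The test function f below is an arbitrary choice.

```python
import numpy as np

# Complex Fourier coefficient c_n = (1/T) ∫_0^T f(t) e^{-2πi n t/T} dt,
# approximated by a uniform Riemann sum (exact here, because f is a
# trigonometric polynomial far below the sampling limit).
def fourier_coeff(f, n, T=2 * np.pi, samples=4096):
    t = np.linspace(0.0, T, samples, endpoint=False)
    return np.mean(f(t) * np.exp(-2j * np.pi * n * t / T))

f = lambda t: np.cos(t) + 0.5 * np.sin(3 * t)

# cos t      = (e^{it} + e^{-it})/2       ->  c_{±1} = 1/2
# 0.5 sin 3t = (e^{3it} - e^{-3it})/(4i)  ->  c_3 = -i/4, c_{-3} = +i/4
print(fourier_coeff(f, 1))   # ≈ 0.5
print(fourier_coeff(f, 3))   # ≈ -0.25j
```

The same bookkeeping with real sine/cosine products would require the product-to-sum trigonometric identities at every step.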
Quantum physics
In quantum physics complex numbers are essential. In the oldest formulation, due to Heisenberg, the imaginary unit appears in an essential way through the canonical commutation relation

q_j p_i − p_i q_j = iℏ δ_ij,

where p_i and q_j are linear operators (matrices) representing the ith and jth components of the momentum and position of a particle, respectively, ℏ is Planck's constant divided by 2π, and δ_ij is the Kronecker delta.
The time-dependent Schrödinger equation also contains i in an essential manner. For a free particle of mass m the equation reads

iℏ ∂Ψ(r, t)/∂t = −(ℏ²/2m) ∇²Ψ(r, t).
This equation may be compared to the wave equation that appears in several branches of classical physics,

∇²Φ(r, t) = (1/v²) ∂²Φ(r, t)/∂t²,
where v is the velocity of the wave. It is clear from this similarity why Schrödinger's equation is sometimes called the wave equation of quantum mechanics. It is also clear that the essential difference between quantum physics and classical physics is the first-order time derivative including the imaginary unit. The classical equation is real and has on the right hand side a second derivative with respect to time.
The more general form of the Schrödinger equation is

iℏ ∂Ψ/∂t = H Ψ,
where H is the operator representing the energy of the quantum system under consideration. If this energy is time-independent (no time-dependent external fields interact with the system), the equation can be separated, and the imaginary unit enters fairly trivially through a so-called phase factor,

Ψ(t) = e^(−iEt/ℏ) Φ,   with   H Φ = E Φ.
The second equation has the form of an operator eigenvalue equation. The eigenvalue E (one of the possible observable values of the energy) is real, which is a fairly deep consequence of the quantum laws.[1] The time-independent function Φ can very often be chosen to be real; the exception is the case that H is not invariant under time reversal. Indeed, since the time-reversal operator θ is anti-unitary, it follows that

θ H θ⁻¹ = H̄,

where the bar indicates complex conjugation. Now, if H is invariant,

θ H θ⁻¹ = H̄ = H,

then H Φ̄ = E Φ̄ (because E is real), and the real linear combination Φ + Φ̄ is also an eigenfunction belonging to E, which means that the wave function may be chosen real. If H is not invariant, it usually is transformed into minus itself, H̄ = −H. Then Φ and Φ̄ belong to E and −E, respectively, so that they are essentially different and cannot be combined to real form. Time-reversal symmetry is usually broken by magnetic fields, which give rise to interactions linear in spin or orbital angular momentum.
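The claim that eigenfunctions of a time-reversal-invariant (here: real symmetric) H may be chosen real can be illustrated with a small matrix example; the matrix below is an arbitrary stand-in for a Hamiltonian, not taken from the text:

```python
import numpy as np

# A real symmetric "Hamiltonian": invariant under complex conjugation.
H = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 1.0]])
E, U = np.linalg.eigh(H)

v = U[:, 0] * np.exp(0.3j)   # eigenvector dressed with an arbitrary complex phase
u = v + v.conj()             # the real linear combination Φ + Φ̄

assert np.allclose(u.imag, 0.0)        # u is real ...
assert np.allclose(H @ u, E[0] * u)    # ... and still belongs to the eigenvalue E
```

Because H is real and E is real, complex conjugation of H v = E v shows that v̄ is again an eigenvector with the same E, so the real combination v + v̄ works whenever it is nonzero.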
Note
- ↑ If E were complex, two separate measurements would be necessary to determine it: one for its real part and one for its imaginary part. Since quantum physics states that a measurement causes a collapse of the wave function to a state that cannot be predicted beforehand, the two measurements, even if made in quick succession, would interfere with each other, and the energy would be unobservable.
Virial theorem
In mechanics, a virial of a stable system of n particles is a quantity proposed by Rudolf Clausius in 1870. The virial is defined by

−½ ∑_{i=1}^{n} ⟨ F_i · r_i ⟩,
where F_i is the total force acting on the ith particle and r_i is the position of the ith particle; the dot stands for an inner product between the two 3-vectors. The angular brackets indicate a long-time average. The importance of the virial arises from the virial theorem, which connects the virial to the long-time average ⟨ T ⟩ of the total kinetic energy T of the n-particle system:

⟨T⟩ = −½ ∑_{i=1}^{n} ⟨ F_i · r_i ⟩.
Proof of the virial theorem
Consider the quantity G defined by

G ≡ ∑_{i=1}^{n} p_i · r_i.
The vector p_i is the momentum of particle i. Differentiate G with respect to time:

dG/dt = ∑_{i=1}^{n} (dp_i/dt) · r_i + ∑_{i=1}^{n} p_i · (dr_i/dt).
Use Newton's second law and the definition of kinetic energy:

dp_i/dt = F_i,   ∑_{i=1}^{n} p_i · (dr_i/dt) = ∑_{i=1}^{n} m_i v_i · v_i = 2T,
and it follows that

dG/dt = 2T + ∑_{i=1}^{n} F_i · r_i.
Averaging over time gives:

⟨ dG/dt ⟩ = (1/τ) ∫₀^τ (dG/dt) dt = (G(τ) − G(0))/τ = 2⟨T⟩ + ⟨ ∑_{i=1}^{n} F_i · r_i ⟩.
If the system is stable, G is finite at time t = 0 and at time t = τ. Hence, if the averaging interval τ goes to infinity, the quantity (G(τ) − G(0))/τ on the right-hand side goes to zero. Alternatively, if the system is periodic with period τ, then G(τ) = G(0) and the right-hand side vanishes as well. Whatever the cause, we assume that the time average of the time derivative of G is zero, and hence

2⟨T⟩ + ⟨ ∑_{i=1}^{n} F_i · r_i ⟩ = 0,   i.e.,   ⟨T⟩ = −½ ∑_{i=1}^{n} ⟨ F_i · r_i ⟩,
which proves the virial theorem.
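The theorem is easy to verify numerically. For a one-dimensional harmonic oscillator (an illustrative choice, not in the original text), F = −kx, so the virial −½⟨Fx⟩ equals ½k⟨x²⟩ = ⟨V⟩, and the theorem demands ⟨T⟩ = ⟨V⟩:

```python
import numpy as np

m, k, A = 1.0, 4.0, 0.7          # arbitrary mass, force constant, amplitude
w = np.sqrt(k / m)               # angular frequency
period = 2.0 * np.pi / w

# Sample one full period (endpoint excluded so the uniform average is exact).
t = np.linspace(0.0, period, 100_000, endpoint=False)
x = A * np.cos(w * t)
v = -A * w * np.sin(w * t)

T_avg = np.mean(0.5 * m * v ** 2)        # <T>
virial = np.mean(-0.5 * (-k * x) * x)    # -(1/2) <F x>
print(T_avg, virial)                     # equal, as the theorem demands
```

Here the periodic case of the proof applies: G(τ) = G(0) after one period, so the boundary term vanishes exactly.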
Application
An interesting application arises when each particle experiences a potential V of the form

V(r_i) = A r_i^k,   with   r_i ≡ |r_i|,

where A is some constant (independent of space and time) and k is a fixed power.
An example of such a potential is given by Hooke's law with k = 2 and by Coulomb's law with k = −1. The force derived from the potential is

F_i = −∇_i V(r_i).
Consider

r · ∇V(r) = r (dV/dr) = k A r^k = k V(r).

Then, applying this for i = 1, …, n,

∑_{i=1}^{n} F_i · r_i = −∑_{i=1}^{n} r_i · ∇_i V(r_i) = −k ∑_{i=1}^{n} V(r_i) = −k V_tot,

where V_tot is the total potential energy, so that the virial theorem gives 2⟨T⟩ = k ⟨V_tot⟩.
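The step above rests on Euler's relation for homogeneous functions, r · ∇V(r) = k V(r). It can be spot-checked with a finite difference; the values of A, k, and r below are arbitrary:

```python
import numpy as np

A, kpow = 2.0, -1.0                      # Coulomb-like power law V = A r^k
V = lambda r: A * np.linalg.norm(r) ** kpow

r = np.array([0.3, -1.2, 0.8])
h = 1e-6
# central-difference gradient of V at r
grad = np.array([(V(r + h * e) - V(r - h * e)) / (2.0 * h) for e in np.eye(3)])

# Euler's relation for a homogeneous function of degree k: r · ∇V = k V
assert abs(r @ grad - kpow * V(r)) < 1e-6
```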
For instance, for a system of charged particles interacting through a Coulomb interaction (k = −1):

2⟨T⟩ = −⟨V⟩,

where V is the total Coulomb potential energy.
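For the k = −1 case this can be seen directly for a circular orbit, where 2T = −V holds even instantaneously; the sketch below uses an attractive 1/r gravitational potential with arbitrary values as a stand-in for any Coulomb-like interaction:

```python
# Circular orbit in an attractive 1/r potential: the centripetal condition
# m v^2 / r = G m M / r^2 gives v^2 = G M / r, hence 2T = -V at every instant.
G, M, m, r = 1.0, 5.0, 2.0, 3.0   # arbitrary values
v2 = G * M / r
T = 0.5 * m * v2
V = -G * m * M / r
assert abs(2.0 * T + V) < 1e-12
```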
It is of interest to remark that this result also holds quantum mechanically. That is, for a stable molecule with a time-independent wave function, 2⟨T⟩ = −⟨V⟩, where the brackets now denote quantum-mechanical expectation values.