Periodic Pitfalls

I recently hacked up a simulation of one of the simplest quantum dynamics problems – propagating a free particle in time. In the process, I came to appreciate something about the use of Fourier transforms in quantum dynamics that I did not previously understand. In the figures below, you can see how the wave function of the particle spreads out over time, like it is supposed to, but eventually develops a strange wave pattern that should not exist:

Gaussian wave packet at time zero.

Gaussian wave packet at end of simulation.

The free particle is a surprisingly interesting case in quantum mechanics because it has no stationary states; the solutions to the time-independent Schrodinger equation for the free particle are not normalizable, and thus do not correspond to physically realizable states. In practice, we obtain a wave mechanics description of the free particle that matches our classical intuition of a billiard ball flying through space by using a wave packet, an expansion in plane waves:

 \displaystyle \Psi(x) = \int dk X(k) e^{ikx}

Often in quantum mechanics one will see these types of expressions written with k transformed using the de Broglie relation p = \hbar k, but we will ignore those details here.

A particularly useful type of wave packet is the Gaussian wave packet:

\displaystyle \Psi(x) = e^{-\zeta x^2}

Because the momentum distribution corresponding to this wave packet is also a Gaussian, the Gaussian wave packet matches our classical ideas of a particle moving through space with position and momentum that are both known to within some uncertainty.

How do we know that the electrons emitted from some source are described by a Gaussian wave packet, and not any other type of function? Well, we can measure the momentum distribution of the particles when they hit the detector, and there is experimental and theoretical evidence that thermionic electrons obey a Gaussian momentum distribution. Since the momentum operator commutes with the free-particle Hamiltonian, we know that the momentum-space wave function of the particle (and thus the position-space wave function as well) was Gaussian all along its trajectory to the detector.

We propagate this Gaussian in time by using the standard rearrangement of the time-dependent Schrodinger equation for a time-independent Hamiltonian:

\displaystyle i \hbar \frac{\partial \Psi(x,t)}{\partial t} = \left(\hat{H}\Psi\right)(x,t) \Longrightarrow \Psi(x,t) = \left(e^{-\frac{i}{\hbar}\hat{H}(t-t_0)}\Psi\right)(x,t_0)

The simplicity of our Hamiltonian allows easy application of the propagator operator on the right-hand side using the plane-wave expansion of our wave function:

\begin{aligned}\Psi(x,t) &= \left(e^{-\frac{i}{\hbar} \frac{\hat{p}^2}{2m}(t-t_0)}\Psi\right)(x,t_0) \\ &= \int dk\ X(k) e^{i\left(kx -\frac{\hbar k^2 (t-t_0)}{2m}\right)} \end{aligned}

The really cool part is that the momentum-space wave function can be computed easily by sampling values of the position-space wave function along our interval and using the standard discrete Fourier transform to calculate the coefficients of the plane waves. We can see that the momentum-space distribution calculated this way matches the exact result and is constant in time, as it should be:

Momentum-space wave function compared with the exact result.
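
For concreteness, here is a minimal sketch of that step in Python with NumPy. The grid size, box length, and the width parameter \zeta below are illustrative choices of mine, not the values behind the plots, and normalization factors of 2\pi are ignored:

import numpy as np

# Sample the initial Gaussian packet on N points of a box of length L
# (N, L, and zeta are illustrative placeholder values).
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

zeta = 0.5
psi_x = np.exp(-zeta * x**2)             # Psi(x, t=0), unnormalized

# The DFT supplies the plane-wave coefficients; fftfreq gives the
# wavenumbers k = 2*pi*n/L that the DFT implicitly uses.
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
psi_k = np.fft.fft(psi_x) * dx           # ~ integral of Psi(x) exp(-i k x) dx

# Analytic transform of exp(-zeta x^2): sqrt(pi/zeta) exp(-k^2 / (4 zeta))
psi_k_exact = np.sqrt(np.pi / zeta) * np.exp(-k**2 / (4.0 * zeta))
print(np.max(np.abs(np.abs(psi_k) - psi_k_exact)))   # negligible difference

The comparison uses |\Psi(k)| because the raw DFT output carries an extra phase that depends on where the grid origin sits.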

 

However, having never worked in signal processing, I had never really pondered the nature of the object yielded by the DFT.  While the analytic FT yields a perfect representation of an aperiodic function, the DFT performed over any finite interval yields a representation with a period equal to the length of the interval.  

This means that when we evolve the wave packet in time, what we really do is evolve an infinite number of evenly-spaced wave packets. This comes with the territory of using the inverse Fourier transform to generate our position-space wave function at each time step – the result is intrinsically periodic:

\displaystyle \Psi(x+L,t) = \sum_{k=0}^{N-1} X_k(t) e^{2\pi i k\frac{(x+L)-x_0}{L}} =\sum_{k=0}^{N-1} X_k(t) e^{2\pi i k\frac{x-x_0}{L}}

We don’t get to ignore these other images. This causes problems when the wave packet develops significant amplitude near the edges of the interval, as the wave packet images outside of our interval begin to “bleed” into it, corrupting our wave function.
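
Here is a quick numerical check of this, reusing the grid and packet from the sketch above; reconstructing \Psi from its DFT coefficients at a point shifted by the box length L returns exactly the same value, because every term picks up a factor e^{2\pi i k} = 1:

# DFT coefficients X_k and the trigonometric reconstruction
# Psi(x) = sum_k X_k exp(2*pi*i*k*(x - x_0)/L) from the equation above.
X = np.fft.fft(psi_x) / N
modes = np.arange(N)

def psi_from_dft(xp):
    return np.sum(X * np.exp(2j * np.pi * modes * (xp - x[0]) / L))

i0 = N // 2                          # center of the box, where the packet peaks
print(psi_x[i0])                     # original sample
print(psi_from_dft(x[i0]))           # DFT reconstruction reproduces it
print(psi_from_dft(x[i0] + L))       # same value again, one full box away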

After noticing this effect, I made a quick proof-of-concept demonstration. We begin by initializing a Gaussian wave packet in the center of our interval and propagating it in time:

\displaystyle \Psi(x,t) = e^{-\frac{i}{\hbar} \frac{\hat{p}^2}{2m}t}e^{-\zeta x^2}
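
Numerically, this propagation step is just a multiplication in k-space sandwiched between a forward and an inverse FFT. A minimal sketch continuing from the snippets above, in units where \hbar = m = 1 (the propagation time is an arbitrary choice of mine):

# Free-particle propagation with hbar = m = 1:
#   Psi(x, t) = IFFT[ exp(-i k^2 t / 2) * FFT[ Psi(x, 0) ] ]
t = 0.5                                   # arbitrary propagation time
phase = np.exp(-0.5j * k**2 * t)          # free-particle propagator in k-space
psi_x_t = np.fft.ifft(phase * np.fft.fft(psi_x))

# Whatever comes out of the inverse DFT is, by construction, one period of an
# infinite train of evenly spaced copies of the evolved packet.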

If we apply the Fourier transform and inverse Fourier transform in the course of propagating for one step and then plot the new wave function on a larger interval, we can see that it has become periodic:

Wave function after one propagation step, plotted on a larger interval, showing its periodicity.

Running the simulation out further causes interference between the images to develop:

Interference between neighboring wave packet images after further propagation.

In practice, this effect is avoided by increasing the size of the box. In some cases, such as when modeling a truly periodic chemical system like a crystal, the periodicity of the Fourier representation confers a great advantage.

The code used to make all of these plots is on GitHub.

 

 

Lagrange Interpolation I: Runge’s Phenomenon

Lagrange interpolation is one of the coolest numerical methods that I have ever studied; there is much beauty in its sublime simplicity. In this post I want to discuss the basic idea of the technique, as well as a problem with it that the textbooks I have consulted thus far do not mention. We wish to solve the following problem: given some function f(x), can we construct a convenient approximate polynomial that reproduces the exact value of f(x) at some set of points \{x_i\}? Lagrange approached this problem by approximating f(x) as a linear combination of its values at the interpolation points:

 

\displaystyle f(x) \approx P(x) = \sum_{k = 0}^{n} L_{n,k}(x) f(x_k)

 

Examining this functional form, we can see that in order to satisfy our requirement that \forall k\ P(x_k) = f(x_k), we need our functions L_{n,k}(x) to satisfy the following conditions:

\displaystyle L_{n,k}(x_j) = 1 \iff j = k

 

\displaystyle L_{n,k}(x_j) = 0 \iff j \neq k

 

Lagrange constructed the following functions to satisfy these conditions:

\displaystyle L_{n,k}(x) = \prod_{i \neq k} \frac{(x - x_i)}{(x_k - x_i)}
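
Here is a minimal NumPy sketch of these two formulas; the function names and arguments are mine, not taken from the plotting code:

import numpy as np

def lagrange_basis(x, nodes, k):
    # L_{n,k}(x) = product over i != k of (x - x_i) / (x_k - x_i)
    factors = [(x - xi) / (nodes[k] - xi)
               for i, xi in enumerate(nodes) if i != k]
    return np.prod(factors, axis=0)

def lagrange_interpolate(x, nodes, values):
    # P(x) = sum over k of L_{n,k}(x) * f(x_k)
    return sum(values[k] * lagrange_basis(x, nodes, k)
               for k in range(len(nodes)))

Evaluating lagrange_basis at the nodes themselves reproduces the 0/1 conditions above.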

 

You can verify that this function satisfies the requirements above. For a cool illustration of how well this process can work, consider the figure below:

Interpolation of f(x) = cos(x) by Lagrange Polynomials constructed by successively more interpolation points.

You can see that as we use more and more interpolation points, our result systematically converges toward the function that we are approximating:

Errors caused by Lagrange Interpolation using various numbers of points.
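
As a quick sanity check of that convergence with the sketch above, interpolating cos(x) at equally spaced nodes shows the maximum error shrinking as nodes are added (the interval [0, 2\pi] and the node counts are my choices, not necessarily those behind the plots):

xs = np.linspace(0.0, 2.0 * np.pi, 1000)                # dense evaluation grid
for n_nodes in (3, 5, 7, 9, 11):
    nodes = np.linspace(0.0, 2.0 * np.pi, n_nodes)      # equally spaced nodes
    p = lagrange_interpolate(xs, nodes, np.cos(nodes))
    print(n_nodes, np.max(np.abs(p - np.cos(xs))))      # error shrinks down the list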

Runge’s Phenomenon

An interesting property of Lagrange interpolation is that adding more interpolation points, and thus using a higher-degree polynomial to approximate our function, can actually lead to worse accuracy in some cases. This problem is known as Runge’s Phenomenon, and is illustrated in the figure below:

Non-uniform convergence in interpolating polynomials due to high oscillation near interval endpoints.

As you can see, we no longer obtain well-behaved convergence with higher-order polynomials:

Errors do not decrease uniformly with increasing degree of the interpolating polynomial.
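
The plots above don't name the function being interpolated, but the classic demonstration of Runge's Phenomenon uses f(x) = 1/(1 + 25x^2) on [-1, 1]. Running the same error check with it shows the maximum error growing, rather than shrinking, as the degree increases:

def runge(t):
    return 1.0 / (1.0 + 25.0 * t**2)                    # the classic Runge function

xs = np.linspace(-1.0, 1.0, 1000)
for n_nodes in (5, 9, 13, 17, 21):
    nodes = np.linspace(-1.0, 1.0, n_nodes)             # equally spaced nodes
    p = lagrange_interpolate(xs, nodes, runge(nodes))
    print(n_nodes, np.max(np.abs(p - runge(xs))))       # error blows up near the endpoints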

This problem can be ameliorated by the use of better interpolation points such as the Chebyshev nodes, which I will discuss in a future post. The source code used to prepare these plots is available on my GitHub.