Building on the previous work on the 1D Schrödinger equation, I now turn to a two-dimensional system. The time-dependent Schrödinger equation (TDSE) in two dimensions is given by:
i\hbar \frac{\partial \psi(x,y,t)}{\partial t} = \left[ -\frac{\hbar^2}{2m} \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) + V(x,y,t) \right] \psi(x,y,t) = \mathbf{H} \psi(x,y,t)
where:

- \psi(x,y,t) is the wave function,
- \hbar is the reduced Planck constant,
- m is the mass of the particle,
- V(x,y,t) is the potential, and
- \mathbf{H} is the Hamiltonian operator.
Similar to the 1D case, analytical solutions to the TDSE in 2D are limited to a few simple potentials. For more complex potential landscapes, numerical methods are necessary to approximate the wave function \psi(x,y,t). This extension to two dimensions will provide a framework for simulating more realistic quantum systems with spatially varying potentials in both x and y directions.
To extend the discretization of the Schrödinger equation to two dimensions, we consider the Hamiltonian operator \mathbf{H}, which now includes second derivatives with respect to both x and y.
In 2D, the second derivatives of the wave function \psi(x, y, t) with respect to x and y can be approximated using finite differences as follows:
\begin{aligned} \frac{\partial^2 \psi(x,y,t)}{\partial x^2} & \approx \frac{\psi(x + \Delta x, y, t) - 2\psi(x, y, t) + \psi(x - \Delta x, y, t)}{\Delta x^2} \\ \frac{\partial^2 \psi(x,y,t)}{\partial y^2} &\approx \frac{\psi(x, y + \Delta y, t) - 2\psi(x, y, t) + \psi(x, y - \Delta y, t)}{\Delta y^2} \end{aligned}
The Hamiltonian operator \mathbf{H} in 2D can then be written as:
\mathbf{H} = -\frac{\hbar^2}{2m} \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) + V(x,y,t)
Substituting the finite difference approximations for the second derivatives, we obtain a discretized form of the Hamiltonian:
(\mathbf{H}\psi)_{i,j} = -\frac{\hbar^2}{2m} \left[ \frac{\psi_{i+1,j} - 2\psi_{i,j} + \psi_{i-1,j}}{\Delta x^2} + \frac{\psi_{i,j+1} - 2\psi_{i,j} + \psi_{i,j-1}}{\Delta y^2} \right] + V_{i,j} \psi_{i,j}
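To make the stencil concrete, here is a minimal sketch (my own illustration, not part of the solver) that applies the discretized Hamiltonian to a 2D wavefunction array, treating points outside the grid as zero:

import numpy as np

def apply_hamiltonian(psi, V, dx, dy, hbar=1.0, m=1.0):
    # Zero-pad one cell on every side: points outside the grid vanish
    p = np.pad(psi, 1)
    lap = ((p[1:-1, 2:] - 2 * psi + p[1:-1, :-2]) / dx**2     # d^2/dx^2
           + (p[2:, 1:-1] - 2 * psi + p[:-2, 1:-1]) / dy**2)  # d^2/dy^2
    return -hbar**2 / (2 * m) * lap + V * psi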
The resulting Hamiltonian matrix for the 2D system, incorporating both kinetic and potential energy terms, is block tridiagonal: each diagonal block acts on one row of the grid, while the off-diagonal blocks couple neighboring rows:
\mathbf{H} = \begin{bmatrix} \mathbf{T}_1 + \mathbf{V}_1 & \mathbf{C} & \mathbf{O} & \cdots & \mathbf{O} \\ \mathbf{C} & \mathbf{T}_2 + \mathbf{V}_2 & \mathbf{C} & \cdots & \mathbf{O} \\ \mathbf{O} & \mathbf{C} & \mathbf{T}_3 + \mathbf{V}_3 & \cdots & \mathbf{O} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \mathbf{O} & \mathbf{O} & \mathbf{O} & \cdots & \mathbf{T}_N + \mathbf{V}_N \end{bmatrix}
Here, \mathbf{T}_i collects the kinetic energy contributions within row i (the couplings along x), \mathbf{V}_i is the diagonal potential energy for that row, and \mathbf{C} = -\frac{\hbar^2}{2m\,\Delta y^2}\,\mathbf{I} couples each grid point to its neighbor in the adjacent row.
The potential energy matrix \mathbf{V} is a diagonal matrix, with diagonal elements corresponding to the potential energy V(x_i, y_j) at each grid point (x_i, y_j). The kinetic energy matrix \mathbf{T} is formed by the finite difference approximations for the second derivatives, with off-diagonal elements representing the coupling between neighboring grid points.
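A compact way to assemble this structure (a sketch, using units \hbar = 1 and 2m = 1 as in the solver code below, so that H = -\nabla^2 + V) is a Kronecker sum of 1D second-difference operators plus the diagonal potential:

import numpy as np
from scipy import sparse

def build_hamiltonian(Nx, Ny, dx, dy, V_grid):
    # 1D second-difference operator (tridiagonal, hard-wall truncation)
    def second_diff(n, h):
        return sparse.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / h**2
    # Kronecker sum: the first term couples x-neighbours inside each
    # diagonal block, the second couples the same point in adjacent rows
    lap = (sparse.kron(sparse.eye(Ny), second_diff(Nx, dx))
           + sparse.kron(second_diff(Ny, dy), sparse.eye(Nx)))
    return (-lap + sparse.diags(V_grid.flatten())).tocsr()

The Kronecker products reproduce exactly the block-tridiagonal pattern above: the first term fills the diagonal blocks, the second contributes the identity-proportional coupling blocks.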
def initialize_simulation(self):
    self.psi = self.psi_0(self.X, self.Y).flatten()
    self.L = self.create_laplacian_matrix()
    self.V_matrix = sparse.diags(self.V(self.X, self.Y).flatten())
    I_M = sparse.eye(self.Nx * self.Ny)
    # Crank-Nicolson matrices in units hbar = 1, 2m = 1, so H = -L + V:
    # A = I + (i dt / 2) H multiplies psi at the next step,
    # B = I - (i dt / 2) H multiplies psi at the current step
    self.A = I_M - 0.5j * self.dt * (self.L - self.V_matrix)
    self.B = I_M + 0.5j * self.dt * (self.L - self.V_matrix)
def psi_0(self, x, y):
    return np.exp(-((x - self.x0)**2 / (2 * self.sigma_x**2)
                    + (y - self.y0)**2 / (2 * self.sigma_y**2)
                    )) * np.exp(1j * (self.kx * x + self.ky * y))
def V(self, x, y):
    V_real = np.zeros_like(x)
    if cfg.infinite_barrier:
        return V_real
    else:
        width_x = 0.1 * (self.x_max - self.x_min)
        width_y = 0.1 * (self.y_max - self.y_min)
        strength = 5.0
        V_imag = np.zeros_like(x)
        V_imag += strength * (1 - np.tanh((x - self.x_min) / width_x)**2)
        V_imag += strength * (1 - np.tanh((self.x_max - x) / width_x)**2)
        V_imag += strength * (1 - np.tanh((y - self.y_min) / width_y)**2)
        V_imag += strength * (1 - np.tanh((self.y_max - y) / width_y)**2)
        # negative imaginary part, so the boundary layer absorbs
        return V_real - 1j * V_imag
def create_laplacian_matrix(self):
    N = self.Nx * self.Ny
    if cfg.periodic_boundary:
        # x-neighbours (offsets +-1), zeroed where they would couple
        # the end of one grid row to the start of the next
        off_x = np.ones(N - 1) / self.dx**2
        off_x[self.Nx - 1::self.Nx] = 0
        # x wraparound (offsets +-(Nx - 1)): join j = 0 and j = Nx - 1
        wrap_x = np.zeros(N - (self.Nx - 1))
        wrap_x[::self.Nx] = 1 / self.dx**2
        # y-neighbours (offsets +-Nx) and
        # y wraparound (offsets +-(N - Nx)): join i = 0 and i = Ny - 1
        off_y = np.ones(N - self.Nx) / self.dy**2
        wrap_y = np.ones(self.Nx) / self.dy**2
        main = -2 * (1 / self.dx**2 + 1 / self.dy**2) * np.ones(N)
        return sparse.diags(
            [wrap_y, off_y, wrap_x, off_x, main,
             off_x, wrap_x, off_y, wrap_y],
            [-(N - self.Nx), -self.Nx, -(self.Nx - 1), -1, 0,
             1, self.Nx - 1, self.Nx, N - self.Nx],
            shape=(N, N), format='csr')
    else:
        L = sparse.lil_matrix((N, N))
        for i in range(self.Ny):
            for j in range(self.Nx):
                index = i * self.Nx + j
                L[index, index] = -2 * (1 / self.dx**2 + 1 / self.dy**2)
                if i > 0:
                    L[index, index - self.Nx] = 1 / self.dy**2
                if i < self.Ny - 1:
                    L[index, index + self.Nx] = 1 / self.dy**2
                if j > 0:
                    L[index, index - 1] = 1 / self.dx**2
                if j < self.Nx - 1:
                    L[index, index + 1] = 1 / self.dx**2
        return L.tocsr()
This code initializes the wave function, potential, and Hamiltonian matrices, setting up the system for time evolution under the 2D Schrödinger equation.
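A quick accuracy check of the Laplacian (a sketch, assuming solver is an instance of the class above with hard walls and the domain set to the unit square): applied to \sin(\pi x)\sin(\pi y), which vanishes on the boundary, the discrete Laplacian should return -2\pi^2 times the same function, up to O(\Delta x^2) error:

f = (np.sin(np.pi * solver.X) * np.sin(np.pi * solver.Y)).flatten()
error = solver.L @ f + 2 * np.pi**2 * f
print(np.max(np.abs(error)))  # shrinks as the grid is refined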
In the 2D Schrödinger equation simulation, three types of boundary conditions are implemented: periodic, hard wall, and permeable wall. Each type of boundary condition affects how the wave function behaves at the edges of the simulation domain.
Periodic boundary conditions imply that the wave function is continuous across the boundaries, meaning that the wave function on one edge of the domain connects seamlessly with the wave function on the opposite edge. Mathematically, this can be expressed as:
\begin{aligned} & \psi(x_{\text{min}}, y, t) = \psi(x_{\text{max}}, y, t)\\ & \psi(x, y_{\text{min}}, t) = \psi(x, y_{\text{max}}, t) \end{aligned}
In the implementation, periodic boundary conditions are handled by adding wraparound entries to the Laplacian matrix, so that the first and last grid points in each dimension are coupled to each other, forming a loop. In the code above this is done by the wraparound diagonals:
wrap_x[::self.Nx] = 1 / self.dx**2      # join j = 0 and j = Nx - 1
wrap_y = np.ones(self.Nx) / self.dy**2  # join i = 0 and i = Ny - 1
This ensures that the wave function wraps around at the boundaries, simulating a periodic system.
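A simple sanity check of the wraparound (a sketch, assuming solver is an instance of the class above with cfg.periodic_boundary enabled): on a periodic grid every point has exactly two neighbors per dimension, so each row of the Laplacian must sum to zero, i.e. a constant wavefunction has zero Laplacian:

L = solver.create_laplacian_matrix()
row_sums = L @ np.ones(L.shape[0])
assert np.allclose(row_sums, 0)  # constants are annihilated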
Hard wall boundary conditions simulate an impenetrable barrier at the edges of the simulation domain, causing the wave function to reflect entirely when it reaches the boundaries. This condition effectively confines the particle within the domain, with no probability of finding it outside.
Mathematically, the wave function satisfies:
\begin{aligned} & \psi(x_{\text{min}}, y, t) = \psi(x_{\text{max}}, y, t) = 0 \\ & \psi(x, y_{\text{min}}, t) = \psi(x, y_{\text{max}}, t) = 0 \end{aligned}
In the implementation, hard wall boundary conditions are handled implicitly by the structure of the numerical scheme. When the boundaries receive no special treatment, the finite-difference Laplacian simply has no entries coupling the edge points to anything outside the grid, which is equivalent to assuming that the wave function vanishes beyond the domain. The wave function therefore reflects at the edges, just as it would at an infinite potential barrier.
Thus, even if V(x, y) = 0 throughout the domain, the wave function will reflect at the boundaries as if encountering a hard wall, unless additional measures (like an absorbing boundary condition) are implemented. This behavior is a consequence of the kinetic energy term in the Hamiltonian and of the way the numerical Laplacian is truncated at the edges of the grid.
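This is easy to verify by tracking the total probability during the evolution (a sketch, assuming the solver class above and spsolve from scipy.sparse.linalg): with hard walls, the Crank-Nicolson scheme conserves the norm essentially to machine precision, while with absorbing boundaries it decreases once the packet reaches the edge:

norm0 = np.sum(np.abs(solver.psi)**2)
for _ in range(100):
    solver.psi = spsolve(solver.A, solver.B @ solver.psi)
norm1 = np.sum(np.abs(solver.psi)**2)
print(norm1 / norm0)  # ~1.0 for hard walls, < 1 with an absorbing layer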
Permeable wall boundary conditions allow the wave function to partially pass through the boundaries instead of reflecting, simulating a system whose walls are not completely impenetrable. This is achieved by introducing a negative imaginary component in the potential V(x, y) near the boundaries, which absorbs the wave function, producing a gradual decay as the wave approaches the boundary.
There are different ways to implement these boundary conditions, all based on modifying the potential V(x, y) near the boundaries.
One option is a \tanh-based profile for the imaginary part of the potential, which makes the wavefunction decay near the boundaries, effectively absorbing it:
V_imag = np.zeros_like(x)
V_imag += strength * (1 - np.tanh((x - self.x_min) / width_x)**2)
V_imag += strength * (1 - np.tanh((self.x_max - x) / width_x)**2)
V_imag += strength * (1 - np.tanh((y - self.y_min) / width_y)**2)
V_imag += strength * (1 - np.tanh((self.y_max - y) / width_y)**2)
This adds an imaginary component (taken with a negative sign in V) that dampens the wave function as it approaches the boundaries, allowing part of it to leave the domain rather than reflect.
Another option is a smoother CAP (complex absorbing potential): instead of \tanh, it is possible to use a polynomial function that starts at zero and smoothly increases towards the boundaries. A quadratic form is commonly used, but it is not the only option.
I implemented:

- Quadratic (V(x) \propto x^2): simple and often effective;
- Cubic (V(x) \propto x^3): provides a smoother onset of absorption;
- Quartic (V(x) \propto x^4): gives an even smoother onset, but might require a wider absorbing region.
A more general form of polynomial CAP can be written as:
V(x) = -i\eta\left(\frac{x}{L}\right)^n

where:

- \eta sets the absorption strength,
- L is the width of the absorbing region, and
- n is the polynomial order.
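For reference, a minimal sketch of this generic form (not used by the solver; x is measured from the inner edge of the absorbing region):

import numpy as np

def polynomial_cap(x, eta, L, n):
    # x = 0 at the inner edge of the absorbing layer, x = L at the wall;
    # returns the magnitude of the imaginary part, to be used as -1j * value
    ramp = np.clip(x / L, 0.0, 1.0)
    return eta * ramp**n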
I didn’t implement the generic formula; instead I added an optimal CAP version. Research has shown that the optimal CAP for minimizing reflections is actually not a simple polynomial, but a function derived from the Wentzel-Kramers-Brillouin (WKB) approximation:
V(x) = -i\eta \left[1 + \operatorname{erf}\left(\frac{ax}{L} - 1\right)\right]

where:

- \eta sets the absorption strength (the implementation below scales the profile by 1/2, so that \eta is its maximum value),
- L is the width of the absorbing region, and
- a controls how sharply the absorption turns on.
from scipy.special import erf

def cap(z, z_min, z_max, width):
    z_range = z_max - z_min
    cap_width = width * z_range
    # ramps rising linearly from 0 to 1 across the absorbing layers
    left = np.maximum(0, (z_min + cap_width - z) / cap_width)
    right = np.maximum(0, (z - (z_max - cap_width)) / cap_width)
    match cfg.cap_type:
        case 0:
            # quadratic
            return cfg.absorbing_strength * (left**2 + right**2)
        case 1:
            # cubic
            return cfg.absorbing_strength * (left**3 + right**3)
        case 2:
            # quartic
            return cfg.absorbing_strength * (left**4 + right**4)
        case 3:
            # optimal (erf-based)
            a = cfg.cap_opt_a
            left_optimal = 0.5 * cfg.absorbing_strength * (
                1 + erf(a * left - 1))
            right_optimal = 0.5 * cfg.absorbing_strength * (
                1 + erf(a * right - 1))
            return left_optimal + right_optimal
        case _:
            raise ValueError(f"Unknown CAP type {cfg.cap_type}")
def V(self, x, y):
    V_imag = np.zeros_like(x)
    ...
    V_imag += cap(x, self.x_min, self.x_max,
                  cfg.absorbing_width_x)
    V_imag += cap(y, self.y_min, self.y_max,
                  cfg.absorbing_width_y)
    return V_real - 1j * V_imag
In the 2D case, the temporal discretization of the Schrödinger equation follows a similar process as in the 1D case, using the Crank-Nicolson method for stable and accurate time evolution.
The time-dependent Schrödinger equation in two dimensions is:
i\hbar \frac{\partial \psi(x, y, t)}{\partial t} = \mathbf{H} \psi(x, y, t)
Where \mathbf{H} is the Hamiltonian matrix that includes both the kinetic and potential energy components.
To discretize the time, we use a time step \Delta t. The Crank-Nicolson method averages the action of the Hamiltonian between the current and the next time step, which leads to the update equation:
\left( \mathbf{I} + \frac{i\Delta t}{2\hbar} \mathbf{H} \right) \psi^{n+1} = \left( \mathbf{I} - \frac{i\Delta t}{2\hbar} \mathbf{H} \right) \psi^n
This can be written as:
\mathbf{A} \psi^{n+1} = \mathbf{B} \psi^n
Where:
\begin{aligned} & \mathbf{A} = \mathbf{I} + \frac{i\Delta t}{2\hbar} \mathbf{H} \\ & \mathbf{B} = \mathbf{I} - \frac{i\Delta t}{2\hbar} \mathbf{H} \end{aligned}
In the 2D case, the Hamiltonian matrix \mathbf{H} is constructed using the Laplacian and potential energy terms, similar to the 1D case. The code for temporal discretization and evolution is straightforward, utilizing the Crank-Nicolson method:
def initialize_simulation(self):
    self.psi = self.psi_0(self.X, self.Y).flatten()
    self.L = self.create_laplacian_matrix()
    self.V_matrix = sparse.diags(self.V(self.X, self.Y).flatten())
    I_M = sparse.eye(self.Nx * self.Ny)
    # Crank-Nicolson matrices in units hbar = 1, 2m = 1, so H = -L + V
    self.A = I_M - 0.5j * self.dt * (self.L - self.V_matrix)
    self.B = I_M + 0.5j * self.dt * (self.L - self.V_matrix)
def compute(self):
    self.perc = 0
    for i in range(self.steps):
        # advance one step: solve A psi^{n+1} = B psi^n
        self.psi = spsolve(self.A, self.B @ self.psi)
This approach provides a stable and accurate numerical method for solving the time-dependent Schrödinger equation in two dimensions, with the Crank-Nicolson method ensuring that the simulation remains stable even for relatively large time steps.
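Since \mathbf{A} does not change between steps, a common optimization (a sketch, using scipy's splu) is to factorize it once and reuse the factorization at every step, instead of calling spsolve repeatedly:

from scipy.sparse.linalg import splu

def compute(self):
    lu = splu(self.A.tocsc())  # factorize once; A is constant in time
    for _ in range(self.steps):
        self.psi = lu.solve(self.B @ self.psi)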
To effectively simulate the dynamics of a quantum particle in two dimensions, a 2D Gaussian wavepacket is commonly used as the initial wavefunction. This wavepacket allows for precise control over the initial position and momentum of the particle in both spatial dimensions.
The 2D Gaussian wavepacket is defined as:
\psi(x, y, 0) = \left( \frac{1}{2\pi\sigma_x \sigma_y} \right)^{1/2} \exp\left(-\frac{(x - x_0)^2}{4\sigma_x^2} - \frac{(y - y_0)^2}{4\sigma_y^2}\right) \exp\left(i\frac{k_x x + k_y y}{\hbar}\right)
Where:

- (x_0, y_0) is the initial center of the packet,
- \sigma_x and \sigma_y are its widths in x and y, and
- k_x and k_y set the initial momentum along x and y.
import numpy as np

def create_wavepacket_2d(x, y, x0, y0, sigma_x, sigma_y, kx, ky, hbar):
    # Gaussian envelope in both x and y
    gaussian = np.exp(-((x - x0)**2 / (2 * sigma_x**2)
                        + (y - y0)**2 / (2 * sigma_y**2)))
    # Plane-wave factor carrying the initial momentum in x and y
    psi0 = gaussian * np.exp(1j * (kx * x + ky * y) / hbar)
    # Discrete normalization: sum |psi0|^2 = 1 on the grid
    # (include a factor dx * dy in the sum for continuum normalization)
    norm = np.sqrt(np.sum(np.abs(psi0)**2))
    psi0 /= norm
    return psi0
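A typical usage looks like this (a sketch; grid bounds and parameters are illustrative):

x = np.linspace(-10, 10, 256)
y = np.linspace(-10, 10, 256)
X, Y = np.meshgrid(x, y)  # arrays of shape (Ny, Nx); x varies fastest
psi0 = create_wavepacket_2d(X, Y, x0=-5.0, y0=0.0,
                            sigma_x=1.0, sigma_y=1.0,
                            kx=2.0, ky=0.0, hbar=1.0)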
In the 2D Schrödinger solver, the initial wavefunction can be set using this 2D Gaussian wavepacket:
def psi_0(self, x, y):
    return np.exp(-((x - self.x0)**2 / (2 * self.sigma_x**2)
                    + (y - self.y0)**2 / (2 * self.sigma_y**2)
                    )) * np.exp(1j * (self.kx * x + self.ky * y))
This function initializes the wavefunction \psi(x, y, 0) with a Gaussian envelope and an initial phase corresponding to the initial momentum. The wavepacket will evolve over time according to the Schrödinger equation, simulating the quantum dynamics of a particle under the influence of the specified potential.
The program allows for the visualization of different aspects of the wavefunction during the simulation. Depending on the selected parameters, you can visualize the probability density, the real and imaginary parts of the wavefunction, or the modulus and phase of the wavefunction.
The probability density |\psi|^2 represents the likelihood of finding a particle in a particular region of space. It is calculated as:
|\psi(x, y)|^2 = \bar{\psi}(x, y) \cdot \psi(x, y)
# Plot the probability distribution
img = ax.imshow(
    np.abs(self.psi.reshape(self.Ny, self.Nx))**2,
    extent=[self.x_min, self.x_max, self.y_min, self.y_max],
    origin='lower',  # row 0 of the array corresponds to y_min
    cmap='hot')
Alternatively, the modulus and phase of the wavefunction can be visualized. The modulus represents the magnitude of the wavefunction, while the phase is mapped to the hue channel of the HSV (Hue, Saturation, Value) color space.
\begin{aligned} & \text{Modulus: } |\psi(x, y)| \\ & \text{Phase: } \arg(\psi(x, y)) \end{aligned}
The phase is normalized to the range [0, 1] for the hue mapping:
\text{Normalized Phase: } \frac{\arg(\psi) + \pi}{2\pi}
# Plot the modulus of the wavefunction and the phase
psi = self.psi.reshape(self.Ny, self.Nx)
magnitude = np.abs(psi)
phase = np.angle(psi)
normalized_phase = (phase + np.pi) / (2 * np.pi)
# Map the phase to hue through the HSV colormap (returns an RGBA image)
hsv_image = cm.hsv(normalized_phase)
# Encode the magnitude in the alpha (opacity) channel
hsv_image[..., 3] = np.clip(magnitude / np.nanmax(magnitude), 0, 1)
# Display the image
img = ax.imshow(
    hsv_image, origin='lower',
    extent=[self.x_min, self.x_max, self.y_min, self.y_max])
The complete code and results for the simulation of the time-dependent Schrödinger equation in 2D, including various potential types and visualization options, are available on GitHub. The repository contains detailed instructions on how to set up the environment, run the simulations, and visualize the results.
The repository is available here.