Usage
This guide details the complete workflow for using LetzElPhC, from generating the initial DFT data to performing the final electron-phonon matrix element calculations and interpolations.
The workflow is divided into three main sections:
- Running Basic DFT: Generating the necessary electronic and phononic data using Quantum Espresso.
- Running the Electron-Phonon Calculation: Computing the electron-phonon matrix elements on a coarse grid.
- Interpolation Calculation: Interpolating the results to a finer grid.
1. Running Basic DFT
Before using LetzElPhC, you must generate the necessary electronic and phononic databases using Quantum Espresso (QE).
1.1 SCF Calculation
Perform a self-consistent field (SCF) calculation with `pw.x` to find the ground state of your system.
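For concreteness, a minimal SCF input sketch is given below. It is purely illustrative (bulk silicon with placeholder lattice constant, cutoff, and pseudopotential); adapt every value to your system.

```
&control
   calculation = 'scf',
   prefix = 'prefix',      ! must match the prefix used by ph.x and the NSCF run
   outdir = './',
   pseudo_dir = './pseudos'
/
&system
   ibrav = 2, celldm(1) = 10.26,   ! placeholder: fcc silicon
   nat = 2, ntyp = 1,
   ecutwfc = 40.0                  ! placeholder cutoff (Ry)
/
&electrons
   conv_thr = 1.0d-10
/
ATOMIC_SPECIES
 Si  28.086  Si.upf
ATOMIC_POSITIONS crystal
 Si  0.00  0.00  0.00
 Si  0.25  0.25  0.25
K_POINTS automatic
 12 12 12 0 0 0
```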
1.2 DFPT Calculation (Phonons)
Run `ph.x` to obtain the dynamical matrices and the deformation potentials due to phonons (\(\delta V_{SCF}\)) on a uniform q-point grid.
Warning
Ensure `fildvscf` is set to save the potential changes required by LetzElPhC.
The phonon q-grid must be commensurate with the k-grid used in the next step.
```
prefix_dvscf
&inputph
   tr2_ph = 1.0d-14,
   verbosity = 'high',
   prefix = 'prefix',
   fildvscf = 'prefix-dvscf', ! Essential: Saves potential changes
   electron_phonon = 'dvscf',
   fildyn = 'prefix.dyn',
   ldisp = .true.,
   nq1 = Nx, nq2 = Ny, nq3 = Nz
/
```
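Assuming the input above is saved as `ph.in`, the DFPT run is launched in the usual QE way:

```bash
mpirun -np 8 ph.x -in ph.in > ph.out
```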
1.3 NSCF Calculation
Run `pw.x` (NSCF) to obtain wavefunctions on a uniform k-point grid.
- The k-grid must be commensurate with the phonon q-grid used in the DFPT calculation.
Shifted grids
Always avoid shifted grids: use unshifted (Γ-centered) k- and q-point grids.
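A hedged NSCF sketch follows; the `&system`, `&electrons`, and structure cards are the same as in the SCF input, and the 12x12x12 grid is a placeholder that must be commensurate with (nq1, nq2, nq3) and left unshifted (the three trailing zeros). Remember to set `nbnd` in `&system` at least as high as the `end_bnd` you will use in LetzElPhC.

```
&control
   calculation = 'nscf',
   prefix = 'prefix',
   outdir = './',
   pseudo_dir = './pseudos'
/
K_POINTS automatic
 12 12 12 0 0 0
```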
1.4 Yambo Initialization
Initialize the Yambo database from the NSCF results.
- Navigate to the NSCF `prefix.save` directory.
- Run `p2y` to convert the QE data to Yambo format.
- Run `yambo` to initialize the `SAVE` folder.
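In shell form, assuming the QE prefix is `prefix`:

```bash
cd prefix.save    # the NSCF output directory
p2y               # convert the QE data to Yambo format
yambo             # initialize the SAVE folder
```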
2. Running the Electron-Phonon Calculation
Once the basic DFT data is ready, you can compute the electron-phonon matrix elements using LetzElPhC.
2.1 Preprocessing
First, prepare the phonon data for LetzElPhC using the built-in preprocessor.
- Navigate to your phonon calculation directory.
- Run the `lelphc` preprocessor (see the sketch after this list). The flag `-pp` tells the code to run the preprocessor instead of the electron-phonon calculation; `PH.X_input_file` is the input file used for `ph.x` in step 1.2.
- This creates a `ph_save` directory containing the necessary files.
- Optional: you can rename the output directory by setting the environment variable `ELPH_PH_SAVE_DIR`.
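A sketch of the preprocessor invocation; the `-F` flag for passing the input file is an assumption (check `lelphc -h` for the exact interface), and `ph.in` stands for the `ph.x` input from step 1.2:

```bash
lelphc -pp -F ph.in   # -pp selects the preprocessor; -F (assumed) passes the ph.x input
```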
2.2 Execution
Run the main LetzElPhC calculation.
- Ensure you have both the `SAVE` directory (from step 1.4) and the `ph_save` directory (from step 2.1).
- Create an input file (e.g., `lelphc.in`).
- Run the code, typically in parallel (see the sketch below):
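For example, on 12 MPI ranks (again assuming the `-F` input-file flag):

```bash
mpirun -np 12 lelphc -F lelphc.in
```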
Input File Description (`lelphc.in`)
```
nkpool = 1
# Number of MPI pools for k-point parallelization

nqpool = 1
# Number of MPI pools for q-point parallelization

start_bnd = 1
# First band to include in the calculation

end_bnd = 40
# Last band to include in the calculation

save_dir = SAVE
# Path to the Yambo SAVE directory

ph_save_dir = ph_save
# Path to the phonon save directory generated by the preprocessor

kernel = dfpt
# Screening level:
# - dfpt: DFPT screening (total change in KS potential)
# - dfpt_local: DFPT screening excluding the non-local pseudo part
# - bare: No screening
# - bare_local: Local part of pseudo only

convention = standard
# Momentum transfer convention:
# - standard: <k+q|dV|k> (Stores derived quantities for k -> k+q)
# - yambo: <k|dV|k-q> (Stores derived quantities for k -> k-q)
```
2.3 Parallelization Scheme
LetzElPhC utilizes a hierarchical parallelization scheme involving q-pools, k-pools, and plane-wave distribution.
Requirement
The total number of CPUs must be divisible by `nkpool * nqpool`.
Example: Running on 12 CPUs with `nkpool = 3` and `nqpool = 2`:
- Level 1 (q-pools): The 12 CPUs are divided into `nqpool = 2` groups. Each group (q-pool) handles a subset of q-points. Size: 12 / 2 = 6 CPUs per q-pool.
- Level 2 (k-pools): Each q-pool is further divided into `nkpool = 3` sub-groups. Each sub-group (k-pool) handles a subset of k-points. Size: 6 / 3 = 2 CPUs per k-pool.
- Level 3 (Plane waves): The 2 CPUs within each k-pool distribute the plane-wave components.
```mermaid
graph TD
    Total["Total CPUs: 12"] --> QPool1["Q-Pool 1<br>(CPUs 1-6)<br>Subset of q-points"]
    Total --> QPool2["Q-Pool 2<br>(CPUs 7-12)<br>Subset of q-points"]
    QPool1 --> KPool1_1["K-Pool 1<br>(CPUs 1-2)"]
    QPool1 --> KPool1_2["K-Pool 2<br>(CPUs 3-4)"]
    QPool1 --> KPool1_3["K-Pool 3<br>(CPUs 5-6)"]
    QPool2 --> KPool2_1["K-Pool 1<br>(CPUs 7-8)"]
    QPool2 --> KPool2_2["K-Pool 2<br>(CPUs 9-10)"]
    QPool2 --> KPool2_3["K-Pool 3<br>(CPUs 11-12)"]
    KPool1_1 --> PW1["Plane Wave Dist."]
    KPool1_2 --> PW2["Plane Wave Dist."]
    KPool1_3 --> PW3["Plane Wave Dist."]
    KPool2_1 --> PW4["Plane Wave Dist."]
    KPool2_2 --> PW5["Plane Wave Dist."]
    KPool2_3 --> PW6["Plane Wave Dist."]
```
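Putting the example together: the pool sizes are set in the input file, while the total CPU count is fixed at launch (same assumed `-F` flag as above):

```bash
# lelphc.in contains:
#   nqpool = 2
#   nkpool = 3
mpirun -np 12 lelphc -F lelphc.in   # 12 is divisible by nkpool * nqpool = 6
```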
2.4 Yambopy Workflow (Automated)
Yambopy can be conveniently used to call LetzElPhC and generate NetCDF databases that can be directly fed into the Yambo code. This automates the preprocessing and execution steps.
Requirements
- LetzElPhC installed and in your PATH.
- Yambopy installed.
Execution
Run the yambopy command in the directory where the `SAVE` folder is located (see the examples below).
Examples
Serial Run
To run LetzElPhC serially for bands from \(n_i\) to \(n_f\):
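A hedged sketch; the `l2y` subcommand and `-b` band flag are assumptions about the installed yambopy version, so check `yambopy -h` before use:

```bash
yambopy l2y -b n_i n_f   # hypothetical flags: -b selects the band range [n_i, n_f]
```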
Parallel Run
For a calculation with 4 q-pools and 2 k-pools, specifying the executable path and avoiding deletion of logs:
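A correspondingly hedged parallel sketch; all flags below are hypothetical placeholders for the pool, executable-path, and log-keeping options exposed by your yambopy version:

```bash
# hypothetical flags: -nq/-nk set 4 q-pools and 2 k-pools, -lelphc points to the
# executable, and -keep_logs prevents deletion of the lelphc logs
mpirun -np 8 yambopy l2y -b n_i n_f -nq 4 -nk 2 -lelphc /path/to/lelphc -keep_logs
```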
Result
Check the SAVE directory for Yambo-compatible databases:
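The exact database names depend on the code versions, but they follow Yambo's `ndb.*` convention; a quick check (the `ndb.elph*` pattern is an assumption):

```bash
ls SAVE/ndb.elph*   # electron-phonon databases produced by the run
```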
3. Interpolation Calculation
Computing phonons (dynamical matrices and deformation potentials) is often computationally demanding; therefore, it is necessary to rely on interpolation methods. This is commonly done for dynamical matrices and has become the default approach for obtaining phonon dispersions.
In this section, we discuss how to use the LetzElPhC code to generate dynamical matrices and deformation potentials by performing phonon calculations on a coarse grid and subsequently obtaining results on a much finer grid through Fourier interpolation.
The procedure is as follows:
- Create an input file for interpolation (e.g., `interpolate.in`).
- Run `lelphc` with the `-i` flag, which tells the code to run the interpolation program (see the sketch after this list).
- A new phonon save directory (e.g., `ph_save_interpolation`) is created, which can be used directly as input for the electron-phonon code to compute on fine q-points, as usual.
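A sketch of the interpolation call, with the same assumed `-F` input-file flag as before:

```bash
lelphc -i -F interpolate.in   # -i selects the interpolation program
```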
Input File Description (`interpolate.in`)
```
ph_save_dir = ph_save
# Directory containing the coarse grid phonon data

ph_save_interpolation_dir = ph_save_interpolation
# Directory where interpolation results will be saved.
# After the interpolation run is done, you can give this
# directory to the electron-phonon code to compute on the fine grid.

interpolate_dvscf = true
# Whether to interpolate the deformation potential (dvscf)

nosym = false
# Remove symmetries. Can be true/false (or 1/0, .true./.false.)

asr = "simple"
# Type of ASR to apply (e.g., "simple")

loto = false
# If true, apply LO-TO splitting corrections at Gamma

loto_dir = 0.1 0.1 0
# Direction approaching the Gamma point for LO-TO splitting

nq1 = 1
nq2 = 1
nq3 = 1
# Dimensions of the fine q-point grid for interpolation

write_dVbare = false
# Whether to write the local part of the bare potential changes

eta_induced = 1.0
# Ewald parameter for screened interactions

eta_ph = 1.0
# Ewald parameter for phonon dynamical matrices

# qlist_file = ""
# Optional: file containing a specific list of q-points to interpolate
```