GYACOMO (Gyrokinetic Advanced Collision Moment solver)
Copyright (C) 2022 EPFL
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
Author: Antoine C.D. Hoffmann
Contact: antoine.hoffmann@epfl.ch (do not hesitate!)
# Citing GYACOMO
If you use GYACOMO in your work, please cite (at least) one of the following papers:
...
It can be coupled with precomputed matrices from the code Cosolver (B.J. Frei) to model advanced collision operators.
This repository contains the solver source code (in /src) as well as my personal post-processing Matlab scripts, which are less documented. I recommend that users write their own post-processing scripts based on the H5 files the code outputs.
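For a first look at what the code writes out, one option is to list the contents of an output file with the standard HDF5 command-line tools (assuming they are installed alongside the HDF5 library); the file name outputs_00.h5 follows the naming convention used later in this README:

```bash
# Recursively list all groups and datasets stored in a GYACOMO output file.
h5ls -r outputs_00.h5
```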
#### GYACOMO can
- run in parallel using MPI (mpirun -np N ./path_to_exec Np Ny Nz, where N = Np x Ny x Nz is the total number of processes and Np, Ny, Nz are the numbers of processes along the Hermite polynomial, binormal, and parallel directions, respectively; see the example after this list).
- run in single precision (make gfsp).
- evolve kinetic electrons and ions.
- use an adiabatic electron model.
- include perpendicular magnetic perturbations (Ampere's law).
- use Z-pinch and s-alpha geometries.
- use the Miller geometry framework (elongation, triangularity, etc.).
- use various experimental closures for the linear and nonlinear terms.
- add a background ExB shear flow.
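For example, a parallel run on 8 processes with a 2 x 2 x 2 decomposition could be launched as follows (assuming the executable was built as bin/gyacomo, as in the guideline below):

```bash
# 8 MPI processes = 2 (Hermite) x 2 (binormal) x 2 (parallel direction)
mpirun -np 8 ./bin/gyacomo 2 2 2
```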
#### GYACOMO cannot (yet)
- include parallel magnetic fluctuations
...
A tutorial is now on the code's wiki https://gitlab.epfl.ch/ahoffman/gyacomo/-/wikis/home.
(shorter guideline)
To compile and run GYACOMO, follow these steps:
1. Make sure the correct library paths are set in local/dirs.inc. Refer to INSTALATION.txt for instructions on installing the required libraries.
2. Go to the /gyacomo directory and run make to compile the code. The resulting binary will be located in /gyacomo/bin. You can also compile a debug version by running make dbg.
3. Once the compilation is done, you can test the executable in the directory /testcases/cyclone_example (see the README there).
4. To stop a simulation without corrupting the output file, create a blank file called "mystop" (touch mystop) in the directory where the simulation is running. The code looks for this file every 100 time steps and removes it once the simulation has stopped.
5. Simulations can be chained using the parameter "Job2load" in the fort_XX.90 file. For example, to restart a simulation from the latest 5D state saved in outputs_00.h5, create a new input file called fort_01.90 and set "Job2load" to 0. Then run GYACOMO with the command ./gyacomo 0 or mpirun -np N ./gyacomo Np Ny Nz 0. This will create a new output file called outputs_01.h5 (see the sketch after this list).
6. To generate plots and GIFs from the simulation results, use the script gyacomo/wk/gyacomo_analysis.m and specify the directory where the results are located. Note that this script is not currently a function.
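As a sketch of points 4 and 5 (the input-file names are illustrative, based on the fort_XX.90 convention above):

```bash
# Stop a running simulation cleanly; GYACOMO checks for "mystop" every 100 steps.
touch mystop

# Prepare a restart: create fort_01.90 (e.g. by copying the previous input file)
# and set Job2load = 0 inside it, so the run continues from the last 5D state
# saved in outputs_00.h5.

# Relaunch, serially or with MPI (Np x Ny x Nz processes):
./gyacomo 0
# or
mpirun -np 8 ./gyacomo 2 2 2 0
```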
Note: For some collision operators (Sugama and Full Coulomb), you will need to run COSOlver from B.J.Frei to generate the required matrices in the gyacomo/iCa folder before running GYACOMO.
...
### 4.x GYACOMO
>4.11 Background ExB shear flow is implemented
>4.1 Miller geometry is added and benchmarked for CBC with adiabatic electrons
>4.01 Singular value decomposition is available through LAPACK (used for DLRA experiments)
>4.0 GYACOMO is born: new name, and the code is open-sourced under the GNU GPLv3 license
### 3.x HeLaZ 3D (flux tube s-alpha)
>3.9 Perpendicular electromagnetic fluctuations are evolved by solving Ampere's equation (benchmarked linearly)
>3.8 Benchmarked for CBC against GENE for various gradient values (see Dimits_fig3.m)
>3.7 The frequency plane is transposed from positive kx to positive ky for an easier implementation of shear; 3D Z-pinch geometry is added
>3.6 MPI 3D parallelization in p, kx and z; each parallel option benchmarked against gbms (new molix) for linear shearless flux-tube runs
>3.5 Staggered grid for parallel odd/even coupling
>3.4 Adiabatic electron model
>3.3 Benchmarked against the molix code (B.J. Frei) for linear flux-tube s-alpha runs; works for shear = 0 with periodic z BC
>3.2 Stopping-file procedure, as in GBS, is added
>3.1 Implementation of the mirror force
>3.0 The code is now 3D and reduces to the 2D version if Nz = 1; the coordinates were renamed from (r,z) to (x,y,z), the parallel direction is now ez, and the linear coefficients are precomputed with geometry routines
### 2.x 2D Z-pinch MPI parallel version
...
>2.5 GK cosolver collision implementation
>2.4 MPI 2D Cartesian parallelization (along p and kr)
>2.3 GK Dougherty operator
>2.2 Allow restart with different P,J values
>2.1 First compilable parallel version (1D parallel along kr)
### 1.x Implementation of the nonlinear Poisson bracket term
>1.4 Quantitative study with stationary average particle flux \Gamma_\infty