diff --git a/README.md b/README.md
index d9eaf4429e5dfb7dafcd210eb8c9e8547f7b5b77..0a8bfaed7acd400663b2ce30422ab9f77b745054 100644
--- a/README.md
+++ b/README.md
@@ -5,43 +5,65 @@
 </figcaption>
 </figure>
 
-GYACOMO (Gyrokinetic Advanced Collision Moment solver, 2021)
+GYACOMO (Gyrokinetic Advanced Collision Moment solver)
 Copyright (C) 2022 EPFL
 
-This program is free software: you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation, either version 3 of the License, or
-(at your option) any later version.
+This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
 
-This program is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
+This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
 
-You should have received a copy of the GNU General Public License
-along with this program.  If not, see <https://www.gnu.org/licenses/>.
+You should have received a copy of the GNU General Public License along with this program.  If not, see <https://www.gnu.org/licenses/>.
 
 Author: Antoine C.D. Hoffmann
 
-# Citing GYACOMO
-If you use GYACOMO in your work, please cite the following paper: 
+Contact: antoine.hoffmann@epfl.ch
+
+##### Citing GYACOMO
+If you use GYACOMO in your work, please cite (at least) one of the following papers:
 
 Hoffmann, A., Frei, B., & Ricci, P. (2023). Gyrokinetic simulations of plasma turbulence in a Z-pinch using a moment-based approach and advanced collision operators. Journal of Plasma Physics, 89(2), 905890214. doi:10.1017/S0022377823000284
 
+Hoffmann, A., Frei, B., & Ricci, P. (2023). Gyrokinetic moment-based simulations of the Dimits shift. https://arxiv.org/abs/2308.01016
+
+# What is GYACOMO?
+
+GYACOMO is the Gyrokinetic Advanced Collision Moment solver. It solves the gyrokinetic Boltzmann equation in the delta-f flux-tube limit by projecting the velocity distribution function onto a Hermite-Laguerre velocity basis.
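+
+Schematically (a sketch only; the exact normalization and the resulting moment hierarchy follow the references above), the expansion reads
+
+```math
+f_a(\mathbf{x}, v_\parallel, \mu, t) \simeq \sum_{p=0}^{P}\sum_{j=0}^{J} N_a^{pj}(\mathbf{x}, t)\, H_p\!\left(\tfrac{v_\parallel}{\sqrt{2}\, v_{\mathrm{th},a}}\right) L_j\!\left(\tfrac{\mu B}{T_a}\right) e^{-\tfrac{v_\parallel^2}{2 v_{\mathrm{th},a}^2} - \tfrac{\mu B}{T_a}}
+```
+
+so that, instead of discretizing velocity space on a grid, the code evolves the moments N_a^{pj} up to the truncation orders (P, J) set by pmax and jmax in the input file.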
+
+It can be coupled with precomputed matrices from the code COSOlver (B.J. Frei) to incorporate advanced collision operators, up to the gyro-averaged, linearized, exact Coulomb interaction (GK Landau operator).
+
+This repository contains the solver source code (in /src) as well as my personal post-processing Matlab scripts, which are less documented. I recommend that users write their own post-processing scripts based on the HDF5 (.h5) files the code outputs.
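+
+For a first look at the content of an output file, the standard HDF5 command-line tools can be used (a sketch: it assumes the HDF5 tools are installed alongside the HDF5 library the code already requires, and uses the outputs_00.h5 file name from the restart example below):
+
+```sh
+# recursively list the groups and datasets stored in the first output file
+h5ls -r outputs_00.h5
+```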
+
+#### GYACOMO can
+- run in parallel using MPI (mpirun -np N ./path_to_exec Np Ny Nz, where N = Np x Ny x Nz is the total number of processes and Np, Ny, Nz are the numbers of processes along the Hermite, binormal, and parallel dimensions, respectively; see the example after this list).
+- run in single precision (make gfsp).
+- evolve kinetic electrons and ions.
+- use an adiabatic electron model.
+- include perpendicular magnetic perturbation (Ampere's law).
+- use Z-pinch and s-alpha geometry.
+- use the Miller geometry framework (elongation, triangularity etc.).
+- use various experimental closures for the linear and nonlinear terms.
+- add background ExB shear flow.
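+
+For example, a run split over the Hermite, binormal, and parallel dimensions is launched as follows (a minimal sketch: the 2 x 4 x 3 decomposition is purely illustrative, and the executable path assumes the binary produced in /gyacomo/bin as described in the compilation steps below):
+
+```sh
+# 2 x 4 x 3 = 24 MPI processes: 2 in Hermite (Np), 4 in binormal (Ny), 3 in parallel (Nz)
+mpirun -np 24 ./bin/gyacomo 2 4 3
+```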
+
+#### GYACOMO cannot (yet)
+- include parallel magnetic fluctuations.
+- use an adiabatic ion model.
+- include finite rhostar effects.
+- study stellarator geometries.
+
 # How to compile and run GYACOMO
 
-A tutorial is present on the code's wiki https://gitlab.epfl.ch/ahoffman/gyacomo/-/wikis/home.
+A tutorial is available on the code's wiki: https://gitlab.epfl.ch/ahoffman/gyacomo/-/wikis/home.
 
-(older guideline)
+(shorter guideline)
 To compile and run GYACOMO, follow these steps:
 
 1. Make sure the correct library paths are set in local/dirs.inc. Refer to INSTALATION.txt for instructions on installing the required libraries.
 2. Go to the /gyacomo directory and run make to compile the code. The resulting binary will be located in /gyacomo/bin. You can also compile a debug version by running make dbg.
-3. The file fort.90 should contain the parameters for a typical CBC. To test the compilation, navigate to the directory where fort.90 is located and run the executable /bin/gyacomo.
-4. GYACOMO can be run in parallel using MPI by running mpirun -np N ./bin/gyacomo Np Ny Nz, where N = Np x Ny x Nz is the number of processes and Np Ny Nz are the parallel dimensions in Hermite polynomials, binormal direction, and parallel direction, respectively.
-5. To stop the simulation without corrupting the output file, create a blank file called "mystop" using touch mystop in the directory where the simulation is running. The file will be removed once it is read.
-6. It is possible to chain simulations by using the parameter "Job2load" in the fort.90 file. For example, to restart a simulation from the latest 5D state saved in outputs_00.h5, create a new fort.90 file called fort_01.90 and set "Job2load" to 0. Then run GYACOMO with the command ./gyacomo 0 or mpirun -np N ./gyacomo Np Ny Nz 0. This will create a new output file called output_01.h5.
-7. To generate plots and gifs using the simulation results, use the script gyacomo/wk/gyacomo_analysis.m and specify the directory where the results are located. Note that this script is not currently a function.
+3. Once the compilation is done, you can test the executable in the /testcases/cyclone_example directory (see the README there) or in /testcases/zpinch_example (see the wiki).
+4. To stop the simulation without corrupting the output file, create a blank file called "mystop" by running touch mystop in the directory where the simulation is running. The code checks for this file every 100 time steps and removes it once the simulation has been stopped.
+5. Simulations can be chained using the parameter "Job2load" in the fort_XX.90 file. For example, to restart a simulation from the latest 5D state saved in outputs_00.h5, create a new input file called fort_01.90 and set "Job2load" to 0. Then run GYACOMO with the job index as the last argument, i.e. ./gyacomo 1 or mpirun -np N ./gyacomo Np Ny Nz 1. This will create a new output file called outputs_01.h5 (see the sketch after this list).
+6. To generate plots and GIFs from the simulation results, use the script gyacomo/wk/gyacomo_analysis.m and specify the directory where the results are located. Note that this script is not currently a function.
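+
+A minimal restart sketch (it assumes, as in the test-case READMEs, that the trailing command-line index selects the corresponding fort_XX.90 input file; the 1 x 6 x 1 decomposition is illustrative):
+
+```sh
+# job 00 has produced outputs_00.h5; chain job 01 from it
+cp fort_00.90 fort_01.90           # copy the input file
+# edit fort_01.90 and set job2load = 0 in the &BASIC namelist
+mpirun -np 6 ./bin/gyacomo 1 6 1 1   # reads fort_01.90, writes outputs_01.h5
+```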
 
 Note: For some collision operators (Sugama and Full Coulomb), you will need to run COSOlver from B.J.Frei to generate the required matrices in the gyacomo/iCa folder before running GYACOMO.
 
@@ -51,31 +73,35 @@ Note: For some collision operators (Sugama and Full Coulomb), you will need to r
 
 ### 4.x GYACOMO
 
->4.1 Miller geometry is added and benchmarked for CBC adiabatic electrons
+>4.11 Background ExB shear flow
+
+>4.1 Miller geometry benchmarked
+
+>4.01 Singular value decomposition is available with LAPACK (used for DLRA experiments)
 
->4.0 new name and opening the code with GNU GPLv3 license
+>4.0 GYACOMO is born and the code is released as open source under the GNU GPLv3 license
 
 ### 3.x HeLaZ 3D (flux tube s-alpha)
 
->3.9 HeLaZ can now evolve electromagnetic fluctuations by solving Ampere equations (benchmarked linearly)
+>3.9 Perpendicular electromagnetic fluctuations by solving Ampere's law (benchmarked linearly)
 
->3.8 HeLaZ has been benchmarked for CBC with GENE for various gradients values (see Dimits_fig3.m)
+>3.8 Benchmarked for CBC against GENE for various gradient values (see Dimits_fig3.m)
 
->3.7 The frequency plane has been transposed from positive kx to positive ky for easier implementation of shear. Also added 3D zpinch geometry
+>3.7 The frequency plane has been transposed from positive kx to positive ky for easier implementation of shear. Also added 3D Z-pinch geometry
 
->3.6 HeLaZ is now parallelized in p, kx and z and benchmarked for each parallel options with gbms (new molix) for linear fluxtube shearless.
+>3.6 MPI 3D parallelization in p, kx and z, benchmarked for each parallel option against gbms (new molix) for linear shearless flux-tube runs
 
 >3.5 Staggered grid for parallel odd/even coupling
 
->3.4 HeLaZ can run with adiabatic electrons now!
+>3.4 Adiabatic electrons
 
->3.3 HeLaZ 3D has been benchmarked in fluxtube salphaB geometry linear run with molix (B.J.Frei) code and works now for shear = 0 with periodic z BC
+>3.3 Benchmarked linearly in flux-tube s-alpha geometry against the molix code (B.J. Frei); works for shear = 0 with periodic z BC
 
 >3.2 Stopping file procedure like in GBS is added
 
 >3.1 Implementation of mirror force
 
->3.0 HeLaZ is now 3D and works like HeLaZ 2D if Nz = 1, the axis were renamed from r and z  to x,y and z. Now the parallel direction is ez. All arrays have been extended, diagnostics and analysis too. The linear coefficients are now precomputed with geometry routines.
+>3.0 3D version; behaves like the 2D version if Nz = 1. The coordinates were renamed from (r,z) to (x,y,z) and the parallel direction is now ez.
 
 ### 2.x 2D Zpinch MPI parallel version
 
@@ -85,15 +111,15 @@ Note: For some collision operators (Sugama and Full Coulomb), you will need to r
 
 >2.5 GK cosolver collision implementation
 
->2.4 2D cartesian parallel (along p and kr)
+>2.4 MPI 2D Cartesian parallelization (along p and kr)
 
 >2.3 GK Dougherty operator
 
->2.2 Allow restart with different P,J values (results are not concluents)
+>2.2 Allow restart with different P,J values
 
 >2.1 First compilable parallel version (1D parallel along kr)
 
-### 1.x Implementation of the non linear Poisson brackets term
+### 1.x Implementation of the nonlinear Poisson bracket term
 
 >1.4 Quantitative study with stationary average particle flux \Gamma_\infty
 
diff --git a/testcases/DIII-D_triangularity_fast_nonlinear/README b/testcases/DIII-D_triangularity_fast_nonlinear/README
new file mode 100644
index 0000000000000000000000000000000000000000..85d4b5824909a14171884023ae8264f12c8b59e9
--- /dev/null
+++ b/testcases/DIII-D_triangularity_fast_nonlinear/README
@@ -0,0 +1,6 @@
+This is an example of an edge DIII-D L-mode, negative triangularity run in the hot electron limit with the GYACOMO code.
+
+You can run it sequentially in single precision by typing: ./gyacomo23_sp 0
+or in parallel on six processes: mpirun -np 6 ./gyacomo23_sp 1 6 1 0
+
+The code must be compiled beforehand and the executable should be located in the /gyacomo/bin/ folder.
diff --git a/testcases/DIII-D_triangularity_fast_nonlinear/fort_00.90 b/testcases/DIII-D_triangularity_fast_nonlinear/fort_00.90
new file mode 100644
index 0000000000000000000000000000000000000000..1879e0c7c7dbd8fa41f016b23b4b5d905fb7f422
--- /dev/null
+++ b/testcases/DIII-D_triangularity_fast_nonlinear/fort_00.90
@@ -0,0 +1,103 @@
+&BASIC
+  nrun       = 100000000
+  dt         = 0.025       ! time step
+  tmax       = 300         ! maximal simulated time
+  maxruntime = 43000       ! maximal wall-clock runtime (s)
+  job2load   = -1          ! -1: fresh start, otherwise restart from outputs_XX.h5
+/
+&GRID
+  pmax  = 2                ! maximum Hermite degree
+  jmax  = 1                ! maximum Laguerre degree
+  Nx    = 128
+  Lx    = 300
+  Ny    = 32
+  Ly    = 150
+  Nz    = 32
+  SG    = .false.          ! staggered grid for parallel odd/even coupling
+  Nexc  = 0
+/
+&GEOMETRY
+  geom     = 'miller'
+  q0       = 4.8           ! safety factor
+  shear    = 2.55          ! magnetic shear
+  eps      = 0.3           ! inverse aspect ratio
+  kappa    = 1.57          ! elongation
+  s_kappa  = 0.48          ! elongation derivative
+  delta    =-0.40          ! triangularity (negative)
+  s_delta  =-0.25          ! triangularity derivative
+  zeta     = 0.00          ! squareness
+  s_zeta   = 0.00          ! squareness derivative
+  parallel_bc = 'dirichlet'
+  shift_y = 0
+  Npol    = 1
+/
+&DIAGNOSTICS
+  dtsave_0d = 0.5
+  dtsave_1d = -1
+  dtsave_2d = -1
+  dtsave_3d = 2.0
+  dtsave_5d = 20
+  write_doubleprecision = .false.
+  write_gamma = .true.
+  write_hf    = .true.
+  write_phi   = .true.
+  write_Na00  = .true.
+  write_Napj  = .true.
+  write_dens  = .true.
+  write_temp  = .true.
+/
+&MODEL
+LINEARITY = 'nonlinear'
+RM_LD_T_EQ= .false.
+  Na      = 1              ! number of evolved species
+  ADIAB_E = .t.            ! adiabatic electron model
+  mu_x    = 1.0
+  mu_y    = 1.0
+  N_HD    = 4
+  mu_z    = 5.0
+  HYP_V   = 'hypcoll'
+  mu_p    = 0
+  mu_j    = 0
+  nu      = 1.0            ! collision frequency (normalized)
+  beta    = 0.00           ! plasma beta (no electromagnetic effects)
+  ExBrate = 0              ! background ExB shear rate
+  MHD_PD  = .true.
+/
+&CLOSURE
+  hierarchy_closure='truncation'
+  dmax             =-1
+  nonlinear_closure='truncation'
+  nmax             =-1
+/
+&SPECIES
+  name_  = 'ions' 
+  tau_   = 1               ! temperature ratio
+  sigma_ = 1               ! square root of the mass ratio
+  q_     = 1.0             ! charge
+  K_N_   = 00!2.79         ! normalized density gradient
+  K_T_   = 5.15            ! normalized temperature gradient
+/
+&SPECIES
+  name_  = 'electrons' 
+  tau_   = 1               ! temperature ratio
+  sigma_ = 0.023           ! square root of the mass ratio
+  q_     = -1              ! charge
+  K_N_   = 2.79            ! normalized density gradient
+  K_T_   = 17.3            ! normalized temperature gradient
+/
+&COLLISION
+  collision_model = 'DG'   ! gyrokinetic Dougherty operator
+  GK_CO      = .true.
+  INTERSPECIES    = .true.
+  mat_file        = 'gk_sugama_P_20_J_10_N_150_kpm_8.0.h5'   ! precomputed COSOlver matrices (required for Sugama/Coulomb operators)
+  collision_kcut  = 1
+/
+&INITIAL
+  INIT_OPT      = 'blob'
+  init_background  = 0
+  init_noiselvl = 1e-05
+  iseed         = 42
+/
+&TIME_INTEGRATION
+  numerical_scheme = 'RK4'
+/
diff --git a/testcases/cyclone_example/README b/testcases/cyclone_example/README
new file mode 100644
index 0000000000000000000000000000000000000000..b466d703ce7cfcd4e868c36e94bb7f9db5bc3b7f
--- /dev/null
+++ b/testcases/cyclone_example/README
@@ -0,0 +1,6 @@
+This is an example of a Cyclone Base Case run (see Dimits et al. 2000, Hoffmann et al. 2023) with the GYACOMO code.
+
+You can run it sequentially in single precision by typing: ./gyacomo23_sp 0
+or in parallel on six processes: mpirun -np 6 ./gyacomo23_sp 1 6 1 0
+
+The code must be compiled beforehand and the executable should be located in the /gyacomo/bin/ folder.