From 6f177ca30ac8580d6244aad5a9ee50aa9dda151e Mon Sep 17 00:00:00 2001
From: Antoine Hoffmann <antoine.hoffmann@epfl.ch>
Date: Tue, 25 Oct 2022 09:55:06 +0200
Subject: [PATCH] upgrade README

---
 README.md | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index 55f0f197..4c6c1aa4 100644
--- a/README.md
+++ b/README.md
@@ -19,18 +19,17 @@ To compile it check INSTALLATION.txt
 
 How to run it
 
-1. Be sure to have correct library paths in local/dirs.inc for the different libraries
-2. Compile from /gyacomo using make, the binary will be located in /gyacomo/bin
-4. You can run a typical CBC to test the compilation using the basic fort.90 parameter file,
-   just type ./bin/gyacomo
-5. It is possible to run it in parallel (MPI) as mpirun -np N ./bin/gyacomo Np Ny Nz
-   where N=Np x Ny x Nz is the number of processes and Np Ny Nz are the parallel dimensions in Hermite polynomials, binormal direction and parallel direction, respectively
-6. You can stop your simulation without breaking output file by creating a blank file call "mystop"
-   in the directory where the simulation is running. (the file will be removed once read)
-7. You can obtain various plots and gifs using gyacomo/wk/header_3D_results.m once the simulation is done.
+1. Be sure to have the correct library paths in local/dirs.inc for the different libraries; see INSTALLATION.txt for a tutorial on installing the required libraries.
+2. Compile from /gyacomo using make; the binary will be located in /gyacomo/bin. (You can also compile a debug version using make dbg.) A command sketch for steps 2-5 is given after this list.
+3. The fort.90 file should contain the parameters of a typical CBC, which can be used to test the compilation. Run it by calling the executable ./bin/gyacomo in the directory where fort.90 is located.
+4. It is possible to run GYACOMO in parallel using MPI: mpirun -np N ./bin/gyacomo Np Ny Nz, where N = Np x Ny x Nz is the total number of processes and Np, Ny, Nz are the numbers of processes in the Hermite polynomial, binormal, and parallel directions, respectively.
+5. You can stop a simulation without corrupting its output file by creating a blank file called "mystop" (e.g. with "touch mystop") in the directory where the simulation is running; the file is removed once it has been read.
+6. Simulations can be given IDs in order to chain them. The job2load parameter selects which output file is read to restart a simulation. For example, a first run with job2load = -1 creates outputs_00.h5. To continue it, create a new parameter file fort_01.90 with job2load = 0 and run GYACOMO with the index of that file as the last argument, i.e. "./gyacomo 1" or "mpirun -np N ./gyacomo Np Ny Nz 1". The new run starts from the latest 5D state saved in outputs_00.h5 and writes a new output file, outputs_01.h5 (see the sketch below).
+7. Once the simulation is done, you can obtain various plots and gifs using gyacomo/wk/gyacomo_analysis.m. The directory where the results are located must be set in the script (it is not a function, yet...).
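+
+A minimal sketch of steps 2-5; the process grid 2 x 2 x 1 and the corresponding -np 4 are arbitrary example values, not requirements:
+
+```bash
+cd /gyacomo
+make                               # builds ./bin/gyacomo (use "make dbg" for a debug build)
+./bin/gyacomo                      # serial CBC test run, reads ./fort.90
+mpirun -np 4 ./bin/gyacomo 2 2 1   # 2 (Hermite) x 2 (binormal) x 1 (parallel) = 4 processes
+touch mystop                       # from the run directory: stops the simulation cleanly
+```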
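+
+A sketch of the restart chaining of step 6, assuming (as described there) that the trailing integer selects which fort_XX.90 file is read:
+
+```bash
+./bin/gyacomo                      # first run: fort.90 has job2load = -1, writes outputs_00.h5
+cp fort.90 fort_01.90              # new parameter file; edit it so that job2load = 0
+./bin/gyacomo 1                    # reads fort_01.90, restarts from outputs_00.h5, writes outputs_01.h5
+```
+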
 // Comment : For some collision operators (Sugama and Full Coulomb) you have to run COSOlver from B.J.Frei first in order to generate the required matrices in gyacomo/iCa folder. //
 
 # Changelog
+
 4. GYACOMO
   4.1 Miller geometry is added and benchmarked for CBC adiabatic electrons
 
-- 
GitLab