# Exercise 5: Introduction to N-body cosmological simulations (up to 3 weeks)

*(This exercise is inspired by a hands-on session given by Volker Springel in 2013)*

In this practical work, we will simulate the evolution of a portion of a universe, from very high redshift to the present time. The work is divided into the following steps:

## 1. Generating the initial conditions

Initial conditions for cosmological simulations are based on the power spectrum of visible matter in the large-scale Universe. The latter is derived from direct observations of large galaxy surveys, weak-lensing surveys, and Lyman-alpha absorption, as well as from the cosmic microwave background.

To generate initial conditions for cosmological simulations, one usually proceeds through the following steps:

- Set up a Cartesian grid with dimensions corresponding to the volume of the simulated universe. Periodic boundary conditions will be applied.
- Compute the perturbation field using the inverse Fourier transform of the power spectrum.
- Compute the velocity perturbations, following the Zel'dovich approximation.
- Apply these perturbations to the Cartesian grid.
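The recipe above can be sketched in a few lines of Python (illustrative only: the power-law spectrum used here is a toy placeholder, not the real CDM spectrum computed by N-GenIC, and the growth factor is set to one):

```python
import numpy as np

# Minimal sketch of the Zel'dovich IC recipe above (illustrative only;
# N-GenIC does this with a proper cosmology and normalisation).
rng = np.random.default_rng(42)
ngrid, box = 16, 100.0                      # grid cells per side, box size

k1d = 2 * np.pi * np.fft.fftfreq(ngrid, d=box / ngrid)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = 1.0                           # avoid division by zero at k=0

# Gaussian random field with amplitude set by sqrt(P(k)); P(k) ~ 1/k here
# is a placeholder, not the real power spectrum
pk = k2 ** -0.5
delta_k = (rng.normal(size=k2.shape) + 1j * rng.normal(size=k2.shape)) * np.sqrt(pk)
delta_k[0, 0, 0] = 0.0                      # zero mean overdensity

# Zel'dovich displacement field: psi_k = i k / k^2 * delta_k
psi_x = np.fft.ifftn(1j * kx / k2 * delta_k).real
psi_y = np.fft.ifftn(1j * ky / k2 * delta_k).real
psi_z = np.fft.ifftn(1j * kz / k2 * delta_k).real

# Displace particles from the unperturbed grid (growth factor D = 1 here)
q = (np.indices((ngrid,) * 3) + 0.5) * (box / ngrid)
pos = np.stack([q[0] + psi_x, q[1] + psi_y, q[2] + psi_z]) % box
print(pos.shape)
```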

Different codes have been developed to generate initial conditions. Here, we will use a parallel code called `N-GenIC`, developed by Volker Springel.
In order to use this code, you need to:

Download the IC code:

```
wget http://www.mpa-garching.mpg.de/gadget/n-genic.tar.gz
tar -xzf n-genic.tar.gz
cd N-GenIC/
```

Edit the Makefile. Add the following lines:

```
ifeq ($(SYSTYPE),"lastro_TP4")
CC       = mpicc
OPTIMIZE = -Wall
GSL_INCL =
GSL_LIBS =
FFTW_INCL= -I /dios/shared/apps/fftw/2.1.5/include/
FFTW_LIBS= -L /dios/shared/apps/fftw/2.1.5/lib/
MPICHLIB =
endif
```

This block defines the paths to some needed libraries. In order to use these definitions, change:

```
SYSTYPE="OPA-Cluster64-Gnu"
```

to:

```
SYSTYPE="lastro_TP4"
```

Compile:

```
make
```

This should compile the code and create a binary called `N-GenIC`.

In order to execute `N-GenIC`, we need to provide it with some parameters. An example of a file containing these parameters is given in "ics.param". Useful parameters are:

parameter name | value | description
---|---|---
`Box` | 150000.0 | Periodic box size of the simulation
`Omega` | 0.3 | Total matter density (at z=0)
`OmegaLambda` | 0.7 | Cosmological constant (at z=0)
`HubbleParam` | 0.7 | Hubble parameter
`Sigma8` | 0.9 | Power spectrum normalization
`Redshift` | 63 | Starting redshift

`HubbleParam` corresponds to the little $h$, which is linked to the Hubble constant through:

$$H_0 = 100\,h~\mathrm{km\,s^{-1}\,Mpc^{-1}}$$

Units are defined by the following parameters:

parameter name | value | description
---|---|---
`UnitLength_in_cm` | 3.085678e21 | defines the length unit of the output (in cm/h)
`UnitMass_in_g` | 1.989e43 | defines the mass unit of the output (in g/h)
`UnitVelocity_in_cm_per_s` | 1e5 | defines the velocity unit of the output (in cm/s)

QUESTIONS:

- What is the length unit in kpc?
- What is the mass unit in solar masses?
- What is the time unit in seconds?
- What is the value of the gravitational constant in these units?
- Compute the value of the Hubble constant in this system of units.
- What is the time unit in Hubble times?
- What is the value of the critical density of the universe in these units?
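As a starting point for these questions, the basic conversions can be set up numerically (a sketch; $G$ and the solar mass are standard cgs constants, and the $h$ factors are deliberately ignored here for simplicity):

```python
# Sketch of the unit conversions asked above (h factors ignored).
UnitLength_in_cm = 3.085678e21          # length unit from the table
UnitMass_in_g = 1.989e43                # mass unit from the table
UnitVelocity_in_cm_per_s = 1e5          # velocity unit from the table

cm_per_kpc = 3.085678e21                # 1 kpc in cm
g_per_msun = 1.989e33                   # 1 solar mass in g

length_kpc = UnitLength_in_cm / cm_per_kpc
mass_msun = UnitMass_in_g / g_per_msun
time_s = UnitLength_in_cm / UnitVelocity_in_cm_per_s   # time = length/velocity

# Gravitational constant converted from cgs to these internal units
G_cgs = 6.674e-8                        # cm^3 g^-1 s^-2
G_internal = G_cgs * UnitMass_in_g * time_s**2 / UnitLength_in_cm**3

print(length_kpc, mass_msun, time_s, G_internal)
```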

For the sake of simplicity, edit this file and change:

```
NumFilesWrittenInParallel 2
```

to:

```
NumFilesWrittenInParallel 1
```

Other parameters may be kept unchanged. To run the code, simply type:

```
mkdir ICs
./N-GenIC ics.param
```

You should end up with a file called ICs/ics.

At this stage, it is possible to check your initial conditions using the `gwin` program. Simply type:

```
gwin -t gadget ICs/ics
```

and then press the **Auto Obs** button and the **Out** one.
Double right-click to refresh the screen.

## 2. Run the simulation

In the first part, we set up initial conditions, where positions, velocities and masses of N particles have been defined. The second step is to let the system evolve under gravitational forces. We will hereafter neglect all other types of forces, such as pressure forces. We thus assume that the system is composed of dark matter only (which is only sensitive to gravity).

Basically, what we need to do is to integrate the $N$ equations of motion, where $N$ corresponds to the number of particles. At every time, the force acting on each particle depends on the positions of all the others. On top of that, the expansion of the universe must be taken into account. By using a comoving reference frame, it is possible to hide the expansion of the universe in the equations of motion.
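In comoving coordinates $\mathbf{x} = \mathbf{r}/a$, one common form of these equations is the following (a sketch of the standard textbook form; see the Gadget-2 paper for the exact convention implemented in the code):

```latex
\ddot{\mathbf{x}}_i + 2 H(a)\,\dot{\mathbf{x}}_i
  = -\frac{G}{a^{3}} \sum_{j \neq i} m_j\,
    \frac{\mathbf{x}_i - \mathbf{x}_j}{\left|\mathbf{x}_i - \mathbf{x}_j\right|^{3}}
```

The Hubble drag term $2H\dot{\mathbf{x}}$ is where the expansion of the universe is "hidden".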

In practice, computing the forces on N particles due to N particles is far from trivial, as the problem scales as $O(N^2)$. We will simply use here an open-source code called `Gadget-2`, designed to solve this kind of problem.
The full documentation of `Gadget-2` may be found here: http://www.mpa-garching.mpg.de/gadget/users-guide.pdf
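To see where the $O(N^2)$ scaling comes from, here is a naive direct-summation sketch (illustrative only; Gadget-2 replaces this with a tree and particle-mesh algorithm):

```python
import numpy as np

# Naive O(N^2) direct summation: each particle feels every other particle.
def direct_forces(pos, mass, G=1.0, eps=1e-2):
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):                       # outer loop: N iterations
        d = pos - pos[i]                     # inner work: N distances
        r2 = (d**2).sum(axis=1) + eps**2     # softened squared distance
        r2[i] = np.inf                       # no self-force
        acc[i] = G * (mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(0)
pos = rng.random((100, 3))
acc = direct_forces(pos, np.ones(100))
print(acc.shape)
```

With equal masses, the accelerations should sum to zero (momentum conservation), which is a useful sanity check on any force computation.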

In order to use `Gadget-2`, do the following:

From your working directory, download and extract the Gadget-2 sources:

```
wget http://www.mpa-garching.mpg.de/gadget/gadget-2.0.7.tar.gz
tar -xzf gadget-2.0.7.tar.gz
cd Gadget-2.0.7/Gadget2/
```

Edit the Makefile:

increase the mesh size needed by the PM gravity solver:

```
OPT += -DPMGRID=256
```

This will fit our 128^3 particle set-up.

do not use the HDF5 format:

```
#OPT += -DHAVE_HDF5
```

compute the potential energy:

```
OPT += -DCOMPUTE_POTENTIAL_ENERGY
```

write potentials and accelerations in the snapshot files:

```
OPT += -DOUTPUTPOTENTIAL
OPT += -DOUTPUTACCELERATION
```

As previously, add a block that defines the paths to some needed libraries:

```
ifeq ($(SYSTYPE),"lastro_TP4")
CC       = mpicc
OPTIMIZE = -O3 -Wall
GSL_INCL =
GSL_LIBS =
FFTW_INCL= -I /dios/shared/apps/fftw/2.1.5/include/
FFTW_LIBS= -L /dios/shared/apps/fftw/2.1.5/lib/
MPICHLIB =
endif
```

In order to use this block, change:

```
SYSTYPE="MPA"
```

to:

```
SYSTYPE="lastro_TP4"
```

Compile:

```
make
```

This should compile the code and create a binary called `Gadget2`.

Now, prepare the directory where you will run the simulation.

First, you need to have a directory in the `/SCRATCH` of `regor2`. Ask the system manager in order to get one.

Then, assuming that your user name is `lastro_etu`, create a directory that will contain the simulation:

```
mkdir /SCRATCH/lastro_etu/lcdm256-001
cd /SCRATCH/lastro_etu/lcdm256-001
```

Copy the sources that you just compiled:

```
rsync -av ~/TP4/Gadget-2.0.7/Gadget2/ src
```

Copy the initial conditions:

```
rsync -av ~/TP4/N-GenIC/ICs .
```

Create the directory that will store the output files:

```
mkdir snap
```

Create a file that will contain the times (in scale factor) at which you want to store snapshots, i.e., the state of the system:

```
gmktimesteps -p 5 > outputs.txt
```

Copy an example of a parameter file:

```
rsync -av ~/TP4/Gadget-2.0.7/Gadget2/parameterfiles/lcdm_gas.param params
```

You need to edit this file to set up particular parameters:

```
InitCondFile        ICs/ics
OutputDir           snap/
OutputListFilename  outputs.txt
```

Set the beginning and end of the simulation to the scale factors corresponding to redshifts of 63 and 0 (present day):

```
TimeBegin  give a value here
TimeMax    give a value here
```

Verify the cosmological parameters, as well as the size of the box. These must be identical to the ones used in `N-GenIC`:

```
Omega0       0.3
OmegaLambda  0.7
OmegaBaryon  0.0
HubbleParam  0.7
BoxSize      150000.0
```

Set a gravitational softening length of order the mean particle spacing, fixed in comoving coordinates:

```
SofteningHalo         give a value here
SofteningHaloMaxPhys  give a value here
```
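The following sketch may help with the "give a value here" entries above (the relation $a = 1/(1+z)$ and the box and particle numbers are taken from earlier in this document; plug the numbers into the parameter file yourself):

```python
# Hints for the parameter values above (a sketch).
z_begin, z_end = 63.0, 0.0
a_begin = 1.0 / (1.0 + z_begin)     # scale factor a = 1/(1+z)
a_end = 1.0 / (1.0 + z_end)

box = 150000.0                      # BoxSize, in internal length units
nside = 128                         # particles per dimension
mean_spacing = box / nside          # mean inter-particle separation

print(a_begin, a_end, mean_spacing)
```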

And finally, run the simulation. As we will run it in parallel, using the queue system of `regor2` (see the regor2 documentation), you first need to create a batch script; let's call it `start.slurm`:

```
#!/bin/bash
#SBATCH
mpirun -mca btl ^openib src/Gadget2 params
```

Then, to submit the job, simply type:

```
sbatch -p r12 -J mysimu -n 8 start.slurm
```

This will run the simulation using 8 CPUs.

To check that your simulation is running, type:

```
squeue
```

and you should see your job listed.
Using 8 CPUs, the simulation should take between one and two hours.
Output files will be generated in the `snap` directory.
The file `slurm-xxxxxx.out` contains the standard output.
You can *tail* it to follow the progress of the simulation:

```
tail -n 100 slurm-xxxxxx.out
```

## 3. Analyzing the results

### Let’s take a look at it

It is always interesting to have a look at a simulation, as the human eye is very good at spotting suspicious artefacts. Using some tools provided with the pNbody package, we will hereafter see how to create a movie showing the evolution of the simulation.

In the directory of the simulation, create a directory called `film`, and move into it. There, create a file called `filmparam.py` with the following content:

```
nh = 1 # number of horizontal frame
nw = 1 # number of vertical frame
# size of subfilms
width = 512
height = width
# size of the film
numByte = width * nw
numLine = height * nh
# init parameters
param = initparams(nh,nw)
# 1: full view
param[1]['ftype'] = 'gadget'
param[1]['size'] = (160000/2.,160000/2.)
param[1]['obs'] = None
param[1]['xp'] = None
param[1]['x0'] = None
param[1]['view'] = 'xy'
param[1]['frsp'] = 0
param[1]['mode'] = 'm'
param[1]['time'] = "nb.atime"
param[1]['exec'] = "nb.translate([-nb.boxsize/2.,-nb.boxsize/2.,-nb.boxsize/2.])"
```

In short, we here define a movie containing only one frame (`nh = 1, nw = 1`) with a physical size of `(160000/2.,160000/2.)`, where particles are projected onto the `xy` plane. However, before the projection, we need to center the box by applying a translation to the full system: `nb.translate([-nb.boxsize/2.,-nb.boxsize/2.,-nb.boxsize/2.])`.

The movie is then created using the command `mkgmov`, applied to all snapshots created by your simulation:

```
mkgmov -p filmparam.py film.gmv ../snap/snapshot_*
```

Once the movie `film.gmv` is finished, you can watch it using the dedicated tool `gmov`:

```
gmov film.gmv
```

If the simulation was successful, you should see the growth of large-scale structures. At $z=0$, the universe should be similar to the one displayed in Fig. 2.

### Extracting haloes

In the previous section, we have seen that, from the initially nearly homogeneous universe, structures emerge under the action of gravity. At $z=0$, a large number of clusters (or haloes) seem to be present. In order to characterize the state of the universe at $z=0$, it is common to compute the so-called “halo mass function”, i.e., the number of haloes within a given range of mass.

However, to do so, it is first necessary to extract those haloes from the final snapshot. This part is far from trivial and different algorithms exist. In this work, we will use the classical Friends-of-Friends (FoF) algorithm, implemented in a code called `FoF_Special`, kindly provided by Volker Springel.

In short, the idea of the FoF algorithm is the following: each particle constitutes a group together with its neighboring particles (friends) lying within a distance smaller than a given linking length (a fixed parameter). If two groups share one or more particles, they are merged. At the end of the procedure, a number of groups exist. The algorithm guarantees that in a group all particles are friends of friends of friends, etc., of any given particle, and that particles outside the group have no friends in the group, i.e., no particle in the group lies at a distance smaller than the linking length from any particle outside it.
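The procedure described above can be sketched with a simple union-find structure (illustrative only; `FoF_Special` uses a much faster neighbor search than this $O(N^2)$ pair loop):

```python
import numpy as np

# Minimal sketch of the FoF idea: link every pair of particles closer than
# the linking length, then read off the connected groups.
def fof_groups(pos, linking_length):
    n = len(pos)
    parent = list(range(n))

    def find(i):                            # root of particle i's group
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < linking_length:
                parent[find(i)] = find(j)   # merge the two groups

    roots = np.array([find(i) for i in range(n)])
    return [np.flatnonzero(roots == r) for r in sorted(set(roots))]

# Two well-separated clumps of particles -> two groups
pos = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                [5.0, 0, 0], [5.1, 0, 0]])
groups = fof_groups(pos, linking_length=0.3)
print(len(groups))
```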

To use `FoF_Special`, first download it and extract it in the same directory where you extracted `gadget-2.0.7.tar.gz` and `n-genic.tar.gz`:

```
wget http://lastro.epfl.ch/misc/TP4/doc/_downloads/fof.tar.gz
tar -xzf fof.tar.gz
cd FoF_Special/
```

Before compiling, we need to set up the linking length. The latter is hardcoded in the file `main.c`, and its value is given in terms of the mean particle separation:

```
double LinkLength = 0.2;
```
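Note that this value is dimensionless: the physical linking length is 0.2 times the mean inter-particle separation. A quick check, using the box size and particle number from our set-up:

```python
# Physical linking length for LinkLength = 0.2 (box and particle numbers
# taken from the earlier set-up of this document).
box, nside = 150000.0, 128
link_phys = 0.2 * box / nside    # in internal length units
print(link_phys)
```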

Then, compile the code by simply typing:

```
make
```

You should end up with a binary called `FoF_Special`.
To apply `FoF_Special` to the snapshots of our simulation, we will proceed as follows:

In the directory where you ran the simulation, create a folder called `fof` and copy into it the `FoF_Special` binary that you just created.

Then, run `FoF_Special` on the last snapshot, which should correspond to $z=0$. You need to provide `FoF_Special` with three arguments: the output directory, the basename of the snapshot files and the number of the output from which you want to extract haloes. This yields:

```
./FoF_Special . ../snap/snapshot 32
```

Three directories are created: `groups_catalogue`, `groups_indexlist`, `groups_particles`. Each contains a file labeled with the snapshot number. Files in the directory `groups_catalogue` contain information on the groups, while `groups_particles` contains the positions of the particles in the groups. We will not use the files in the directory `groups_indexlist`.

In order to use the output files, we provide students with a small Python library that will help with reading these files.
The module may be downloaded `here`.

#### The dark halo mass function

One common way to characterize the structure of a universe is to measure the statistical properties of its haloes. The simplest statistic we can compute is the number of haloes found in a given mass interval, normalized to the volume of the universe that contains those structures. This function is usually called the “halo mass function”.
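A possible way to measure this function is sketched below (the halo masses are random placeholders; in practice they come from the FoF group catalogue, and the volume used for the normalization is an assumption of this sketch):

```python
import numpy as np

# Sketch of a halo mass function measurement: count haloes in logarithmic
# mass bins and normalize by bin width and box volume.
rng = np.random.default_rng(1)
masses = 10 ** rng.uniform(10, 14, size=1000)   # fake halo masses [Msun]
volume = 150.0 ** 3                             # box volume [(Mpc/h)^3], assumed

bins = np.logspace(10, 14, 21)
counts, edges = np.histogram(masses, bins=bins)
dlogm = np.diff(np.log10(edges))
mass_function = counts / (volume * dlogm)       # dn / dlog10(M) per volume

centers = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centers
print(centers.shape, mass_function.shape)
```

Plotting `mass_function` against `centers` on log-log axes gives the halo mass function asked for in the exercise below.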

Exercises

Write a script that plots the so-called “halo mass function” and display it in log scale.

Using the following power-law fitting formula:

$$\frac{dn}{dM} \propto M^{\alpha}$$

measure the slope $\alpha$ of the halo mass function. How does the slope vary as a function of redshift? What can you conclude about the way galaxies build up?

#### The structure of dark haloes

According to the $\Lambda$CDM paradigm, galaxies form inside dark haloes. Studying the structure of those haloes allows us to make a direct link between the predictions of numerical simulations and observations. Thus, once haloes have been extracted, it is interesting to compute properties like density profiles or rotation curves.
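Such a density profile and rotation curve computation can be sketched as follows (the radii are random placeholders; in practice compute $r = |\mathbf{x} - \mathbf{x}_c|$ after centering on the potential minimum, and the value of $G$ in internal units is an assumption to be checked against the unit questions above):

```python
import numpy as np

# Sketch of a radial density profile and circular velocity for one halo,
# assuming equal-mass particles and radii measured from the halo center.
rng = np.random.default_rng(2)
npart, mpart = 5000, 1.0                    # particle number and mass (assumed)
r = rng.power(0.5, npart) * 100.0           # fake radii, centrally concentrated

bins = np.logspace(-1, 2, 20)
counts, _ = np.histogram(r, bins=bins)
shell_vol = 4.0 / 3.0 * np.pi * (bins[1:]**3 - bins[:-1]**3)
density = counts * mpart / shell_vol        # mass density in each radial shell

# Circular velocity from the enclosed mass: v_c(r) = sqrt(G M(<r) / r)
# (approximate: particles inside the innermost bin are ignored here)
G = 43007.1                                 # G in internal units (assumed)
m_enclosed = mpart * np.cumsum(counts)
v_circ = np.sqrt(G * m_enclosed / bins[1:])
print(density.shape, v_circ.shape)
```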

Exercises

Write a code that enables you to extract and center the 50 most massive haloes.

**hint**: in order to avoid centering problems, center the system on the minimum of the potential of the halo. For each particle contained in the halo, the potential is given by `nb.pot`.

For each extracted halo, compute the values of $r_{200}$ and $M_{200}$, sometimes called the virial radius and the virial mass. $M_{200}$ is defined as the mass enclosed in a sphere of radius $r_{200}$ with mean density equal to 200 times the critical density of the universe ($\rho_c$):

$$M_{200} = \frac{4}{3}\pi r_{200}^3 \cdot 200\,\rho_c$$

Plot the mass density profiles of these haloes, normalised to the critical density of the universe, between the softening length and the virial radius. Try to see if the density profile can be fitted with an NFW profile.

Using averaged quantities, compute the corresponding circular velocity. Which halo corresponds best to the Milky Way?