Exercise 5: Introduction to N-body cosmological simulations (up to 3 weeks)

(This exercise is inspired by a hands-on session given by Volker Springel in 2013.)

In this practical work, we will simulate the evolution of a portion of a \Lambda\rm{CDM} universe, from very high redshift up to the present time. The work is divided into the following steps:

1. Generating the initial conditions

Initial conditions for cosmological simulations are based on the power spectrum of matter on large scales in our Universe. The latter is derived from direct observations: large galaxy surveys, weak-lensing surveys and Lyman-alpha absorption, but also from the cosmic microwave background.

To generate initial conditions for cosmological simulations, one usually proceeds through the following steps (a minimal Python sketch is given after the list):

  1. Set up a Cartesian grid with dimensions corresponding to the volume of the simulated universe. Periodic boundary conditions will be applied.
  2. Compute the perturbation field using the inverse Fourier transform of the power spectrum.
  3. Compute the perturbations of the velocities, following the Zel'dovich approximation.
  4. Apply these perturbations to the Cartesian grid.
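
The following minimal sketch illustrates these four steps on a toy set-up. The power-law power spectrum, grid size and box size are illustrative assumptions only (not the values used in this exercise), and the growth-rate prefactor of the velocities is left as a placeholder:

import numpy as np

N = 32            # grid cells per dimension (toy value)
L = 100.0         # box size, arbitrary units
n_s = -1.0        # toy power-law spectral index, P(k) ~ k^n_s

# Step 1: Cartesian grid of unperturbed (periodic) particle positions
grid = np.indices((N, N, N)) * (L / N)

# Step 2: Gaussian perturbation field with <|delta_k|^2> ~ P(k)
k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
k2 = kx ** 2 + ky ** 2 + kz ** 2
k2[0, 0, 0] = 1.0                          # avoid division by zero at k=0
white = np.fft.fftn(np.random.normal(size=(N, N, N)))
delta_k = white * np.sqrt(np.sqrt(k2) ** n_s)
delta_k[0, 0, 0] = 0.0                     # zero mean overdensity

# Step 3: Zel'dovich displacement field, psi_k = i k delta_k / k^2,
# with velocities proportional to it, v = a H(a) f(a) psi
psi = [np.real(np.fft.ifftn(1j * k / k2 * delta_k)) for k in (kx, ky, kz)]
aHf = 1.0                                  # placeholder growth-rate prefactor
vel = aHf * np.stack(psi)

# Step 4: apply the perturbations to the grid (with periodic wrap)
pos = (grid + np.stack(psi)) % L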

Different codes have been developed to generate initial conditions. Here, we will use a parallel code called N-GenIC, developed by Volker Springel. In order to use this code, you need to:

  1. Download the IC-code:

    wget http://www.mpa-garching.mpg.de/gadget/n-genic.tar.gz
    tar -xzf n-genic.tar.gz
    cd N-GenIC/
    
  2. Edit the Makefile. Add the following lines:

    ifeq ($(SYSTYPE),"lastro_TP4")
    CC = mpicc
    OPTIMIZE = -Wall
    GSL_INCL =
    GSL_LIBS =
    FFTW_INCL= -I /dios/shared/apps/fftw/2.1.5/include/
    FFTW_LIBS= -L /dios/shared/apps/fftw/2.1.5/lib/
    MPICHLIB =
    endif
    

    This block defines the paths to some needed libraries. In order to use these definitions, change:

    SYSTYPE="OPA-Cluster64-Gnu"
    

    to:

    SYSTYPE="lastro_TP4"
    
  3. Compile:

    make
    

    This should compile the code and create a binary called N-GenIC.

In order to execute N-GenIC, we need to provide it with some parameters. An example file containing these parameters is given in “ics.param”. Useful parameters are:

parameter name   value      description
Box              150000.0   periodic box size of the simulation
Omega            0.3        total matter density (at z=0)
OmegaLambda      0.7        cosmological constant (at z=0)
HubbleParam      0.7        Hubble parameter
Sigma8           0.9        power spectrum normalization
Redshift         63         starting redshift

HubbleParam corresponds to the dimensionless “little h”, which is linked to the Hubble constant through:

H_0 = 100\cdot h \,\frac{\rm{km}}{\rm{s}\cdot\rm{Mpc}} = 3.2407789\times 10^{-18}\cdot h \,\frac{1}{\rm{s}}

Units are defined by the following parameters:

parameter name             value         description
UnitLength_in_cm           3.085678e21   length unit of the output (in cm/h)
UnitMass_in_g              1.989e43      mass unit of the output (in g/h)
UnitVelocity_in_cm_per_s   1e5           velocity unit of the output (in cm/s)
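
As a hint for the questions that follow, such conversions are easy to script. Below is a minimal sketch of the method (assumptions: the h factors are deliberately left aside, and the CGS value of G is the only external input):

# Minimal sketch of the internal unit system defined above
# (h factors are ignored; only the three base units are needed).
UnitLength_in_cm = 3.085678e21            # base length unit, in cm
UnitMass_in_g = 1.989e43                  # base mass unit, in g
UnitVelocity_in_cm_per_s = 1e5            # base velocity unit, in cm/s

# The time unit is a derived quantity: time = length / velocity.
UnitTime_in_s = UnitLength_in_cm / UnitVelocity_in_cm_per_s

# Any dimensional constant is converted by dividing by its internal unit,
# e.g. the gravitational constant G, whose CGS value is in cm^3 g^-1 s^-2:
G_cgs = 6.674e-8
G_internal = G_cgs / (UnitLength_in_cm**3 / UnitMass_in_g / UnitTime_in_s**2)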

QUESTIONS:

  • What is the length unit in kpc?
  • What is the mass unit in solar masses?
  • What is the time unit in seconds?
  • What is the value of the gravitational constant in these units?
  • Compute the value of the Hubble constant in this system of units.
  • What is the time unit in Hubble times?
  • What is the value of the critical density of the universe in these units?

For the sake of simplicity, edit this file and change:

NumFilesWrittenInParallel 2

to:

NumFilesWrittenInParallel 1

Other parameters may be kept unchanged. To run the code, simply type:

mkdir ICs
./N-GenIC ics.param

You should end up with a file called ICs/ics.

At this stage, it is possible to check your initial conditions using the gwin program. Simply type:

gwin -t gadget ICs/ics

and then press the Auto Obs button, followed by the Out button. Double-click with the right mouse button to refresh the screen.

2. Run the simulation

In the first part, we have set up initial conditions, where the positions, velocities and masses of N particles have been defined. The second step is to let the system evolve under gravitational forces. We will hereafter neglect all other types of forces, such as pressure forces, and thus assume that the system is composed of dark matter only (which is sensitive to gravity alone).

Basically, what we need to do is to integrate N equations of motion, where N corresponds to the number of particles. At every time, the force acting on each particle depends on the positions of all the others. On top of that, the expansion of the universe must be taken into account. By using a comoving reference frame, it is possible to absorb the expansion of the universe into the equations of motion.

In practice, computing the forces exerted on N particles by N particles is far from trivial, as the problem scales as N^2 (see the sketch below). We will use here an open-source code called Gadget-2, designed to solve this kind of problem. The full documentation of Gadget-2 may be found at: http://www.mpa-garching.mpg.de/gadget/users-guide.pdf
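
To make the N^2 scaling concrete, here is a minimal direct-summation sketch in Python. This is not the algorithm used by Gadget-2, which relies on tree and particle-mesh methods precisely to avoid this cost; the softening length eps and the unit system are illustrative assumptions:

import numpy as np

def direct_accelerations(pos, mass, G=1.0, eps=0.01):
    """Gravitational acceleration on each particle from all the others.

    The inner sum over all particles is vectorized, but the cost is
    still O(N^2): N force sums of N terms each.
    """
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                         # separation vectors
        r2 = (d ** 2).sum(axis=1) + eps ** 2     # softened squared distances
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                          # remove the self-force
        acc[i] = G * (mass[:, None] * d * inv_r3[:, None]).sum(axis=0)
    return acc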

In order to use Gadget-2, do the following:

  1. From your working directory, download and extract the Gadget-2 sources:

    wget http://www.mpa-garching.mpg.de/gadget/gadget-2.0.7.tar.gz
    tar -xzf gadget-2.0.7.tar.gz
    cd Gadget-2.0.7/Gadget2/
    
  2. Edit the Makefile:

    • increase the size of the mesh used by the PM gravity solver:

      OPT   +=  -DPMGRID=256
      

      this fits our 128^3 particle set-up.

    • do not use the HDF5 format:

      #OPT   +=  -DHAVE_HDF5
      
    • compute the potential energy:

      OPT   +=  -DCOMPUTE_POTENTIAL_ENERGY
      
    • write potential and accelerations in the snapshot files:

      OPT   +=  -DOUTPUTPOTENTIAL
      OPT   +=  -DOUTPUTACCELERATION
      
    • as previously, add a block that defines the paths to some needed libraries:

      ifeq ($(SYSTYPE),"lastro_TP4")
      CC = mpicc
      OPTIMIZE = -O3 -Wall
      GSL_INCL =
      GSL_LIBS =
      FFTW_INCL= -I /dios/shared/apps/fftw/2.1.5/include/
      FFTW_LIBS= -L /dios/shared/apps/fftw/2.1.5/lib/
      MPICHLIB =
      endif
      

      In order to use this block, change:

      SYSTYPE="MPA"
      

      to:

      SYSTYPE="lastro_TP4"
      
  3. Compile:

    make
    

    This should compile the code and create a binary called Gadget2.

Now, prepare the directory where you will run the simulation.

  1. First, you need to have a directory in the /SCRATCH of regor2. Ask the system administrator in order to get one.

  2. Then, assuming that your user name is lastro_etu, create a directory that will contain the simulation:

    mkdir /SCRATCH/lastro_etu/lcdm256-001
    cd /SCRATCH/lastro_etu/lcdm256-001
    
  3. Copy the sources that you just compiled:

    rsync -av  ~/TP4/Gadget-2.0.7/Gadget2/ src
    
  4. Copy the initial conditions:

    rsync -av  ~/TP4/N-GenIC/ICs .
    
  5. Create the directory that will store output files:

    mkdir snap
    
  6. Create a file that will contain the times (expressed as scale factors) at which you want to store snapshots, i.e., the state of the system (a Python alternative to gmktimesteps is sketched after this list):

    gmktimesteps -p 5 > outputs.txt
    
  7. Copy an example of parameter file:

    rsync -av  ~/TP4/Gadget-2.0.7/Gadget2/parameterfiles/lcdm_gas.param params
    

    You need to edit this file to set a few specific parameters:

    InitCondFile               ICs/ics
    OutputDir                  snap/
    OutputListFilename         outputs.txt
    

    Set the beginning and the end of the simulation to the scale factors corresponding to redshifts of 63 and 0 (present day), respectively; recall that a = 1/(1+z):

    TimeBegin                  give a value here
    TimeMax                    give a value here
    

    Verify the cosmological parameters, as well as the size of the box. These must be identical to the ones used in N-GenIC:

    Omega0                0.3
    OmegaLambda           0.7
    OmegaBaryon           0.0
    HubbleParam           0.7
    BoxSize               150000.0
    

    Set a gravitational softening length of order \sim 1/35 of the mean particle spacing (i.e., the box size divided by the number of particles per dimension), fixed in comoving coordinates:

    SofteningHalo             give a value here
    SofteningHaloMaxPhys      give a value here
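
The exact list of output times is up to you. In case gmktimesteps is not available, outputs.txt can also be produced with a few lines of Python; spacing the snapshots logarithmically in the scale factor between z=63 and z=0, and the number of outputs, are assumptions of this sketch:

# write_outputs.py -- minimal alternative to gmktimesteps (assumption:
# snapshots equally spaced in log(a) between a(z=63) and a(z=0))
import numpy as np

a_start = 1.0 / (1.0 + 63.0)               # scale factor at z = 63
for a in np.geomspace(a_start, 1.0, 33):   # 33 outputs is arbitrary
    print("%g" % a)

Run it with python write_outputs.py > outputs.txt.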
    

And finally, run the simulation. As we will run it in parallel, using the queue system of regor2 (see the regor2 documentation), you first need to create a batch script; let's call it start.slurm:

#!/bin/bash
#SBATCH

mpirun -mca btl ^openib src/Gadget2 params

Then, to submit the job, simply type:

sbatch -p r12 -J mysimu -n 8 start.slurm

This will run the simulation using 8 CPUs.

To check that your simulation is running, type:

squeue

and you should see your job listed. Using 8 CPUs, the simulation should take between one and two hours. Output files will be generated in the snap directory. The file slurm-xxxxxx.out contains the standard output. You can tail it to follow the progress of the simulation:

tail -n 100 slurm-xxxxxx.out

3. Analyzing the results

Let’s take a look at it

It is always interesting to have a look at a simulation, as the human eye is very good at spotting suspicious artefacts. Using some tools provided with the pNbody package, we will hereafter see how to create a movie showing the evolution of the simulation.

In the directory of the simulation, create a directory called film, and move into it. There, create a file called filmparam.py, with the following content:

nh = 1                  # number of horizontal frames
nw = 1                  # number of vertical frames
# size of the sub-films
width = 512
height = width
# size of the film
numByte = width * nw
numLine = height * nh
# init parameters
param = initparams(nh,nw)

# 1: full view
param[1]['ftype'] = 'gadget'
param[1]['size'] = (160000/2.,160000/2.)
param[1]['obs'] = None
param[1]['xp'] = None
param[1]['x0'] = None
param[1]['view'] = 'xy'
param[1]['frsp'] = 0
param[1]['mode'] = 'm'
param[1]['time'] = "nb.atime"
param[1]['exec'] = "nb.translate([-nb.boxsize/2.,-nb.boxsize/2.,-nb.boxsize/2.])"

In short, we define here a movie containing a single frame (nh = 1, nw = 1) with a physical size of (160000/2., 160000/2.), where particles are projected onto the xy plane. However, before the projection, we need to center the box by applying a translation to the full system: nb.translate([-nb.boxsize/2.,-nb.boxsize/2.,-nb.boxsize/2.]).

The movie is then created using the command mkgmov applied on all snapshots created from your simulation:

mkgmov -p filmparam.py film.gmv ../snap/snapshot_*

Once the movie film.gmv is finished, you can watch it using the dedicated tool gmov:

gmov film.gmv

If the simulation was successful, you should see the growth of large-scale structures. At z=0 (a=1), the universe should be similar to the one displayed in Fig. 2.

../../_images/lcdm1.jpg

Fig. 2: Surface density of a simulated universe at z=0 (a=1).

Extracting haloes

In the previous section, we have seen that, from the initially nearly homogeneous universe, structures emerge under the action of gravity. At z=0, a large number of clusters (or haloes) are present. In order to characterize the state of the universe at z=0, it is common to compute the so-called “halo mass function”, i.e., the number of haloes within a given range of mass.

However, to do so, it is first necessary to extract those haloes from the final snapshot. This part is far from trivial and different algorithms exist. In this work, we will use the classical friends-of-friends (FoF) algorithm, implemented here in a code called FoF_Special, kindly provided by Volker Springel.

In short, the idea of the FoF algorithm is the following: each particle forms a group with its neighbouring particles (its friends), i.e., those lying within a distance smaller than a given linking length (a fixed parameter). If two groups share one or more particles, they are merged. At the end of the procedure, a number of groups remain. The algorithm guarantees that, in a group, all particles are friends of friends of friends, etc., of any given particle, and that particles outside the group have no friends inside it, i.e., no particle in the group lies within the linking length of any particle outside it. A minimal sketch is given below.
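
The following Python sketch implements this idea with a KD-tree and a union-find structure. It is not the implementation used by FoF_Special; in particular, it assumes non-periodic coordinates (a real implementation must handle the periodic box) and takes the linking length in absolute units:

import numpy as np
from scipy.spatial import cKDTree

def fof_groups(pos, link):
    """Label each particle with the index of its FoF group.

    'link' is the linking length in absolute units; periodic boundary
    conditions are ignored in this sketch.
    """
    parent = np.arange(len(pos))             # union-find forest

    def find(i):
        while parent[i] != i:                # follow links to the root,
            parent[i] = parent[parent[i]]    # compressing the path
            i = parent[i]
        return i

    tree = cKDTree(pos)
    for i, j in tree.query_pairs(link):      # all pairs closer than 'link'
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj                  # merge the two groups
    return np.array([find(i) for i in range(len(pos))])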

To use FoF_Special first download it and extract it in the same directory where you extracted gadget-2.0.7.tar.gz and n-genic.tar.gz:

wget http://lastro.epfl.ch/misc/TP4/doc/_downloads/fof.tar.gz
tar -xzf fof.tar.gz
cd FoF_Special/

Before compiling, we need to set the linking length. The latter is hard-coded in the file main.c, and its value is given in terms of the mean particle separation:

double LinkLength = 0.2;

Then compile the code, simply by typing:

make

You should end up with a binary called FoF_Special. To apply FoF_Special to the snapshots of our simulation, we will proceed as follows:

  1. In the directory where you ran the simulation, create a folder called fof and copy into it the FoF_Special binary that you just created.

  2. Then, run FoF_Special on the last snapshot, which should correspond to z=0. You need to provide FoF_Special with three arguments: the output directory, the base name of the snapshot files and the number of the snapshot from which you want to extract haloes. This yields:

    ./FoF_Special . ../snap/snapshot 32
    

    Three directories are created: groups_catalogue, groups_indexlist and groups_particles. Each contains a file labelled with the snapshot number. Files in the directory groups_catalogue contain information about the groups, while those in groups_particles contain the positions of the particles in the groups. We will not use the files in the directory groups_indexlist.

In order to use the output files, we provide students with a small Python library that facilitates reading these files. The module may be downloaded here.

The dark halo mass function

One common way to characterize the structure of a universe is to measure the statistical properties of its haloes. The simplest statistic we can compute is the number of haloes found in a given mass interval, normalized to the volume of the universe that contains those structures. This function is usually called the “halo mass function”; a binning sketch is given below.
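
Here is a minimal sketch of one possible binning convention. It assumes that 'masses' is an array of halo masses (e.g., the number of member particles times the particle mass) and 'volume' is the comoving volume of the box; both are hypothetical inputs to be built from the FoF catalogues:

import numpy as np

def halo_mass_function(masses, volume, nbins=20):
    """Number of haloes per unit mass and per unit volume, dN/dM/V."""
    edges = np.logspace(np.log10(masses.min()),
                        np.log10(masses.max()), nbins + 1)
    counts, edges = np.histogram(masses, bins=edges)
    centers = np.sqrt(edges[1:] * edges[:-1])    # geometric bin centres
    return centers, counts / (np.diff(edges) * volume)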

Exercises

  • Write a script that plots the so-called “halo mass function” and displays it in log scale.

  • Using the following fitting formula:

    \frac{dN}{dM} \sim \left( \frac{M_{\rm{halo}}}{m_0} \right)^n

    measure the slope n of the halo mass function (a fitting sketch is given after this list). How does the slope vary as a function of redshift? What can you conclude concerning the way galaxies build up?
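
One possible way to measure n is a linear least-squares fit in log-log space. This sketch assumes the centers/dndm arrays returned by the halo_mass_function() sketch above:

import numpy as np

def fit_slope(centers, dndm):
    """Least-squares slope of log10(dN/dM) versus log10(M)."""
    good = dndm > 0                                  # skip empty bins
    n, intercept = np.polyfit(np.log10(centers[good]),
                              np.log10(dndm[good]), 1)
    return n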

The dark halo structure

According to the \Lambda\rm{CDM} paradigm, galaxies form inside dark haloes. Studying the structure of those haloes allows us to make a direct link between the predictions of numerical simulations and observations. Thus, once haloes have been extracted, it is interesting to compute properties like density profiles or rotation curves.

Exercises

  • Write a code that enables you to extract and center the 50 most massive haloes.

    hint: in order to avoid centering problems, center the system on the minimum of the potential of the halo. For each particle contained in the halo, the potential is given by:

    nb.pot
    
  • For each extracted halo, compute the values of r_{200} and M_{200}. r_{200} is sometimes called the virial radius and M_{200} the virial mass; they are defined such that the mean density inside a sphere of radius r_{200} is equal to 200 times the critical density of the universe (\rho_{\rm{c}}):

    \frac{M_{200}}{\frac{4}{3}\pi r_{200}^3} = 200\,\rho_{\rm{c}}

  • Plot the mass density profile of these haloes, normalized to the critical density of the universe, between the softening length and the virial radius. Check whether the density profiles can be fitted with an NFW profile.

  • Using averaged quantities, compute the corresponding circular velocity. Which halo corresponds best to the Milky Way? (A hint for the r_{200} computation is sketched below.)
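
As a hint for the r_{200} computation, one possible approach is sketched below: sort the particle radii measured from the halo centre and find where the mean enclosed density drops to 200 \rho_{\rm{c}}. The inputs are hypothetical: 'pos' holds particle positions already centred on the halo, 'mpart' is the particle mass and 'rho_crit' the critical density, all in consistent units:

import numpy as np

def r200_m200(pos, mpart, rho_crit):
    """Radius and mass where the mean enclosed density falls to 200 rho_crit."""
    r = np.sort(np.sqrt((pos ** 2).sum(axis=1)))     # sorted radii
    m_enc = mpart * np.arange(1, len(r) + 1)         # enclosed mass profile
    rho_mean = m_enc / (4.0 / 3.0 * np.pi * r ** 3)  # mean enclosed density
    above = np.where(rho_mean >= 200.0 * rho_crit)[0]
    i = above[-1]                                    # outermost radius above threshold
    return r[i], m_enc[i]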