From: Huebl, Axel <a.huebl@hzdr.de>
Subject: tbg: review scripts before submitting
Date: Wed, 30 Sep 2015 10:17:33 +0200
To: <picongpu-users@hzdr.de>
Cc: Orban, Christopher <orban.14@osu.edu>
Hi Chris,


> It took me a while to figure out where to look for the PBS script that
> it produces.

Sorry for that!

tbg creates the job files by default in the "destinationPath" (a free
parameter of tbg); they are then moved by our script
  src/picongpu/submit/submitAction.sh

to the sub-directory "destinationPath/tbg/".

The batch file is called "submit.start"; it lies alongside the
"submit.cfg" and the "submit.tpl" that were used.


> It would be preferable if there was an option to produce a
> PBS-compatible script but allow the user to do the final qsub after
> looking it over.

Absolutely, just skip the option "-s" (which stands for "submit") and
tbg will only generate the batch file without submitting it.

See:

$ tbg --help

> TBG (template batch generator)
> create a new folder for a batch job and copy in all important files
>
> usage: tbg -c [cfgFile] [-s [submitsystem]] [-t [templateFile]] [-p
> project] [-o "VARNAME1=10 VARNAME2=5"] [-h] destinationPath
>
> -c | --cfg    [file]    - Configuration file to set up batch file.
>                           Default: [cfgFile] via export TBG_CFGFILE
> -s | --submit [command] - Submit command (qsub, "qsub -h", sbatch,
>                                           ...)
>                           Default: [submitsystem] via
>                                    export TBG_SUBMIT
> -t | --tpl    [file]    - Template to create a batch file from.
>                           tbg will use stdin, if no file is specified.
>                           Default: [templateFile] via
>                                    export TBG_TPLFILE
> [...]
> destinationPath         - Directory for simulation output
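
As a minimal sketch of that workflow (PBS assumed; "myRun" is a
placeholder destinationPath, and depending on your tbg version the
generated file may still sit in "myRun/" instead of "myRun/tbg/" when
nothing was submitted):

  $ tbg -c submit.cfg -t submit.tpl myRun   # generate only, do not submit
  $ less myRun/tbg/submit.start             # look it over
  $ qsub myRun/tbg/submit.start             # submit manually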


> when you run PIConGPU do you typically only use 1 MPI process per GPU?

Yes, one MPI host rank controls one GPU.
So if you have 4 GPUs per node, you will need 4 MPI processes on it.
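
As a rough sketch, on a node with 4 GPUs the job would be started
something like this (binary name assumed; the actual picongpu options
come from your .cfg/.tpl files and are elided here):

  $ mpiexec -n 4 ./picongpu ...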


Axel

On 29.09.2015 22:00, Orban, Christopher wrote:
> Hi Axel,
>            I finally got some more time to tinker with PIConGPU today and it turned out that the binary I had compiled does successfully work on the ruby cluster at the Ohio Supercomputer Center.
>            I must admit that using tbg was more complicated than I imagined it to be. It took me a while to figure out where to look for the PBS script that it produces. It would be preferable if there was an option to produce a PBS-compatible script but allow the user to do the final qsub after looking it over.
>            Here is a quick question for you: when you run PIConGPU do you typically only use 1 MPI process per GPU? I got confused the first time I tried running a simulation on one node because I used mpiexec -n 20 on a cluster with 20 CPUs and 1 GPU. Single node operation only worked using 1 CPU (i.e. using mpiexec -n 1). Please let me know if having 1 MPI process per GPU is the standard way of operating the code.
>           thanks,
>           Chris

--

Axel Huebl
Phone +49 351 260 3582
https://www.hzdr.de/crp
Computational Radiation Physics
Laser Particle Acceleration Division
Helmholtz-Zentrum Dresden - Rossendorf e.V.

Bautzner Landstrasse 400, 01328 Dresden
POB 510119, D-01314 Dresden
Vorstand: Prof. Dr.Dr.h.c. R. Sauerbrey
          Prof. Dr.Dr.h.c. P. Joehnk
VR 1693 beim Amtsgericht Dresden
