From: Axel Huebl <a.huebl@hzdr.de>
Subject: Re: comparison to EPOCH
Date: Sun, 12 Mar 2017 18:25:37 +0100
To: <picongpu-users@hzdr.de>
Dear Andrei,

thank you for your question.

I will try to give you an overview; since PIC codes can be compared on
a plethora of aspects, feel free to ask for details anytime.

EPOCH and PIConGPU are both open-source (GPL-licensed), community-driven,
explicit particle-in-cell codes. Both approximate/solve the Vlasov
equation via the same particle-mesh method: in both cases, fields are
discretized in space and time on a regular grid ("cells") and phase
space is approximated with discrete (weighted) particles with arbitrary
spatial shapes (and discrete momenta).
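
To make the shared method a bit more concrete, here is the standard
notation (nothing specific to either code): the Vlasov equation for a
species s and the macroparticle ansatz used to sample it,

  \partial_t f_s + \mathbf{v}\cdot\nabla_{\mathbf{x}} f_s
    + q_s \left( \mathbf{E} + \mathbf{v}\times\mathbf{B} \right)
      \cdot \nabla_{\mathbf{p}} f_s = 0

  f_s(\mathbf{x},\mathbf{p},t) \approx \sum_i w_i \,
    S(\mathbf{x}-\mathbf{x}_i(t)) \, \delta^3(\mathbf{p}-\mathbf{p}_i(t))

where S is the spatial shape function of the macroparticles, w_i their
weighting, and E and B live on the grid mentioned above.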

EPOCH is a long-standing Fortran code with many features, such as
various boundary conditions, that are useful for certain applications.
It is parallelized solely for CPUs and supports multi-node
parallelization via MPI communication. Initially, it was forked from a
version of the PSC code.

PIConGPU, on the other hand, is a complete rewrite in modern C++. Our
aim is to support arbitrary modern hardware, especially GPUs, with both
performance portability between hardware and generic algorithms. We
parallelize both multi-node (currently MPI) and in-node (e.g. CUDA),
usually achieving a 10x higher FLOP/s rate per node than conventional
PIC codes. We are not (yet) as feature-rich as EPOCH in all aspects,
but we are catching up quickly, driven by our main applications:
laser-particle acceleration (electrons, ions, radiation sources) and
general plasma physics/laser-matter modeling.
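
Just to illustrate what "in-node parallelization" means here, below is
a toy CUDA sketch in which one GPU thread advances one macroparticle
(simplified non-relativistic push with a homogeneous E field and B = 0).
This is not PIConGPU source code; our actual kernels are written
against abstractions for performance portability.

  // Toy example only, NOT PIConGPU code: one thread per macroparticle.
  #include <cstdio>
  #include <cuda_runtime.h>

  struct Particle { float x, y, z, vx, vy, vz, weight; };

  __global__ void pushParticles(Particle* p, int n, float3 E,
                                float qOverM, float dt)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i >= n) return;
      // velocity update from the electric field (B omitted for brevity)
      p[i].vx += qOverM * E.x * dt;
      p[i].vy += qOverM * E.y * dt;
      p[i].vz += qOverM * E.z * dt;
      // position update
      p[i].x += p[i].vx * dt;
      p[i].y += p[i].vy * dt;
      p[i].z += p[i].vz * dt;
  }

  int main()
  {
      const int n = 1 << 20;                    // ~1e6 macroparticles
      Particle* d_p;
      cudaMalloc(&d_p, n * sizeof(Particle));
      cudaMemset(d_p, 0, n * sizeof(Particle)); // start from rest at origin

      const float3 E = make_float3(1.0e9f, 0.0f, 0.0f); // arbitrary field [V/m]
      const float qOverM = -1.76e11f;                   // electron q/m [C/kg]
      pushParticles<<<(n + 255) / 256, 256>>>(d_p, n, E, qOverM, 1.0e-15f);
      cudaDeviceSynchronize();

      cudaFree(d_p);
      printf("pushed %d macroparticles on the GPU\n", n);
      return 0;
  }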

Without going into details of what is implemented already, let me refer
you to our README and resources:

  GitHub README: (please follow the links within)

https://github.com/ComputationalRadiationPhysics/picongpu#picongpu---a-many-gpgpu-pic-code

  Wiki:
    https://github.com/ComputationalRadiationPhysics/picongpu/wiki

  New Manual Pages (currently getting started):
    https://picongpu.readthedocs.io/en/dev/


Another aspect is our development cycle. We carry out our full
development, including new features, verification, benchmarking,
testing and project planning, in public on our GitHub page. You do not
need to register to get the newest version of the code, nor is it
locked away behind a login. We encourage people to make the code "their
own" and add what they need, especially when they are interested in
contributing their improvements (models, workflows) back to our
"mainline", where we invest time to review and improve your work in an
open process.

We are also eager not only to improve our small niche of
particle-in-cell methods but to think bigger. We regularly contribute
back to other open source projects (CMake, Boost, compilers, I/O
libraries, post-processing frameworks, ...), and we develop and
establish new methods for PByte-scale I/O and synthetic (on-the-fly)
diagnostics on upcoming hardware generations among the TOP10 systems of
the world. With that, we have reportedly been the fastest PIC code in
the world since going open source in 2013.

In line with that, we are driving open standards for data exchange
across communities, e.g. between experiments and various modeling
codes, through efforts such as openPMD (see http://openPMD.org and
github.com/openPMD for more details).
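
As a rough idea of what openPMD looks like on disk (simplified; please
see the standard itself for the exact attribute names and layout), a
file written with HDF5 is organized along these lines:

  /                        root attributes, e.g. openPMD = "1.0.0",
                           basePath = "/data/%T/", meshesPath = "meshes/",
                           particlesPath = "particles/"
  /data/100/               one group per output iteration (here: step 100)
  /data/100/meshes/E/x     field records (E, B, currents, densities, ...)
  /data/100/particles/e/   particle records (position, momentum, weighting, ...)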

To summarize, it is likely that a setup that can be modeled with EPOCH
can also be modeled with PIConGPU and then computed on GPUs in a
fraction of the time. Nevertheless, in practice the specific
application you are modeling might require specific techniques, say
thermal boundary conditions, that are not (yet) implemented in
PIConGPU. Feel free to check our documentation (above) to see if your
requirements are already met, and if not, let's discuss whether we can
add them together.


Best regards,
Axel

On 12.03.2017 14:15, Andrei Ciprian Berceanu wrote:
> Hi everyone,
>
> As a potential new user, I would like to know how PIConGPU compares to
> existing PIC codes, in particular to EPOCH
> (http://www.ccpp.ac.uk/epoch/epoch_user.pdf).
>
> Thank you,
>
> Andrei Berceanu
>
>
> #############################################################
> This message is sent to you because you are subscribed to
>  the mailing list <picongpu-users@hzdr.de>.
> To unsubscribe, E-mail to: <picongpu-users-off@hzdr.de>
> To switch to the DIGEST mode, E-mail to <picongpu-users-digest@hzdr.de>
> To switch to the INDEX mode, E-mail to <picongpu-users-index@hzdr.de>
> Send administrative queries to  <picongpu-users-request@hzdr.de>
>

--

Axel Huebl
Phone +49 351 260 3582
https://www.hzdr.de/crp
Computational Radiation Physics
Laser Particle Acceleration Division
Helmholtz-Zentrum Dresden - Rossendorf e.V.

Bautzner Landstrasse 400, 01328 Dresden
POB 510119, D-01314 Dresden
Vorstand: Prof. Dr.Dr.h.c. R. Sauerbrey
          Prof. Dr.Dr.h.c. P. Joehnk
VR 1693 beim Amtsgericht Dresden