From: "Huebl, Axel" <huebl@hzdr.de>
Date: Mon, 10 Aug 2015 13:38:28 +0200
To: picongpu-users@hzdr.de
Subject: Re: picongpu in single machine and single gpu

Hi Manzoor,

can you retry the old size again, but set the value totalFreeGpuMemory in param/memory.param to 400 and 450 MB? Currently it's 350 MB, and we would like to see if we can set a more reasonable default for 2 GB GeForce cards.

Thanks!
Axel

On 09.08.2015 13:09, Axel Huebl wrote:
> Great to hear that! Memory usage was just too high then.
>
> Did you already try the second card?
> With that, you can simulate twice the size in nearly the same time ("weak scaling"), or alternatively the same setup in half the time ("strong scaling").
>
> Axel
>
> On August 9, 2015 9:36:21 AM CEST, "k.manzoorolajdad" wrote:
>> I reduced the grid to 64 128 64 and the run completed.
>> Thanks a lot
>> On 8 August 2015 at 20:58, "k.manzoorolajdad" wrote:
>>
>>> thanks a lot
>>> this is the output in the terminal:
>>>
>>> manzoor@manzoor-gpu:~/paramSets/lwfa$ tbg -s bash -c submit/0001gpus.cfg -t submit/bash/bash_mpirun.tpl ~/runs/lwfa
>>> Running program...
>>> tbg/submit.start: line 37: /home/manzoor/picongpu.profile: No such file or directory
>>> Data for JOB [64751,1] offset 0
>>>
>>> ========================   JOB MAP   ========================
>>>
>>> Data for node: manzoor-gpu  Num slots: 4  Max slots: 0  Num procs: 1
>>>         Process OMPI jobid: [64751,1] App: 0 Process rank: 0
>>>
>>> =============================================================
>>> Data for JOB [64762,1] offset 0
>>>
>>> ========================   JOB MAP   ========================
>>>
>>> Data for node: manzoor-gpu  Num slots: 4  Max slots: 0  Num procs: 1
>>>         Process OMPI jobid: [64762,1] App: 0 Process rank: 0
>>>
>>> =============================================================
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | Sliding Window is OFF
>>> [1,0]:mem for particles=1078 MiB = 142315 Frames = 36432640 Particles
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | max weighting 6955.06
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | Courant c*dt <= 1.74147 ? 1
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | omega_pe * dt <= 0.1 ?
>>> 0.0142719
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | y-cells per wavelength: 18.0587
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | macro particles per gpu: 8388608
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | typical macro particle weighting: 6955.06
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_SPEED 2.99792e+08
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_TIME 8e-17
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_LENGTH 2.39834e-08
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_MASS 6.33563e-27
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_CHARGE 1.11432e-15
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_EFIELD 2.13064e+13
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_BFIELD 71070.4
>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_ENERGY 5.69418e-10
>>>
>>>
>>> On Sat, Aug 8, 2015 at 3:44 PM, Axel Huebl wrote:
>>>
>>>> Hi,
>>>>
>>>> Can you please attach your cfg file and output? I think your example's memory consumption is too high, causing a hang.
>>>>
>>>> If you want to speed up the init, use quiet start (lattice-like in-cell positioning of particles) and zero initial temperature.
>>>>
>>>> Two cards, either in the same node or connected via network/MPI, will give you nearly a 2x speedup. SLI is not required; CUDA can't use it and doesn't need it (it's more a frame buffer/gfx/gaming interconnect). Just plug the second card in and use it with -d ... :)
>>>>
>>>> Also, make sure your host (CPU) RAM is at least as large as the sum of the RAM of the GPUs in the node (better a bit more).
>>>>
>>>> Best,
>>>> Axel
>>>>
>>>> On August 8, 2015 12:37:35 PM CEST, "k.manzoorolajdad" <kazem.manzoor@gmail.com> wrote:
>>>>> thanks a lot, Mr. Huebl
>>>>> The example lwfa has been running on my machine on a single core with a single GPU for 45 hours without output. Is that normal?
>>>>> How long does this code run?
>>>>>
>>>>> If I use two GTX 670 GPUs with SLI, can I speed it up? By how much?
>>>>> thanks
>>>>> manzoor
>>>>>
>>>>>
>>>>> On Thu, Aug 6, 2015 at 7:34 PM, k.manzoorolajdad wrote:
>>>>>
>>>>>> thanks
>>>>>> I can run the first example:
>>>>>>
>>>>>> ~/paramSets/lwfa$ tbg -s bash -c submit/0001gpus.cfg -t submit/bash/bash_mpirun.tpl ~/runs/lwfa
>>>>>> Running program...
>>>>>> tbg/submit.start: line 37: /home/manzoor/picongpu.profile: No such file or directory
>>>>>> Data for JOB [64751,1] offset 0
>>>>>>
>>>>>> ========================   JOB MAP   ========================
>>>>>>
>>>>>> Data for node: manzoor-gpu  Num slots: 4  Max slots: 0  Num procs: 1
>>>>>>         Process OMPI jobid: [64751,1] App: 0 Process rank: 0
>>>>>>
>>>>>> =============================================================
>>>>>> Data for JOB [64762,1] offset 0
>>>>>>
>>>>>> ========================   JOB MAP   ========================
>>>>>>
>>>>>> Data for node: manzoor-gpu  Num slots: 4  Max slots: 0  Num procs: 1
>>>>>>         Process OMPI jobid: [64762,1] App: 0 Process rank: 0
>>>>>>
>>>>>> =============================================================
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | Sliding Window is OFF
>>>>>> [1,0]:mem for particles=1078 MiB = 142315 Frames = 36432640 Particles
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | max weighting 6955.06
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | Courant c*dt <= 1.74147 ? 1
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | omega_pe * dt <= 0.1 ?
>>>>>> 0.0142719
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | y-cells per wavelength: 18.0587
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | macro particles per gpu: 8388608
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | typical macro particle weighting: 6955.06
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_SPEED 2.99792e+08
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_TIME 8e-17
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_LENGTH 2.39834e-08
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_MASS 6.33563e-27
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_CHARGE 1.11432e-15
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_EFIELD 2.13064e+13
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_BFIELD 71070.4
>>>>>> [1,0]:PIConGPUVerbose PHYSICS(1) | UNIT_ENERGY 5.69418e-10
>>>>>>
>>>>>> I think my run is on a single core with a single GPU.
>>>>>>
>>>>>> *Can I run on multiple CPUs with a single GPU?*
>>>>>> thanks a lot
>>>>>> manzoor
>>>>>>
>>>>>> On Thu, Aug 6, 2015 at 4:03 PM, k.manzoorolajdad wrote:
>>>>>>
>>>>>>> thanks, Mr. Huebl
>>>>>>>
>>>>>>> I use Ubuntu 14.04 and CUDA 6.0, and installed the requirements:
>>>>>>>
>>>>>>> sudo apt-get install build-essential cmake file cmake-curses-gui libopenmpi-dev zlib1g-dev libboost-program-options-dev libboost-regex-dev libboost-filesystem-dev libboost-system-dev git
>>>>>>>
>>>>>>> git clone https://github.com/ComputationalRadiationPhysics/picongpu.git $HOME/src/picongpu
>>>>>>>
>>>>>>> export PICSRC=$HOME/src/picongpu
>>>>>>>
>>>>>>> *pngwriter* >= 0.5.5
>>>>>>>
>>>>>>> mkdir -p ~/src ~/build ~/lib
>>>>>>> git clone https://github.com/pngwriter/pngwriter.git ~/src/pngwriter/
>>>>>>>
>>>>>>> cd ~/build
>>>>>>> cmake -DCMAKE_INSTALL_PREFIX=~/lib/pngwriter ~/src/pngwriter
>>>>>>> manzoor@manzoor-gpu:~/build$ make install
>>>>>>> Scanning dependencies of target pngwriter_static
>>>>>>> [ 16%] Building CXX object
CMakeFiles/pngwriter_static.dir/src/pngwriter.cc.o
>>>>>>> Linking CXX static library libpngwriter.a
>>>>>>> [ 16%] Built target pngwriter_static
>>>>>>> Scanning dependencies of target blackwhite
>>>>>>> [ 33%] Building CXX object CMakeFiles/blackwhite.dir/tests/blackwhite.cc.o
>>>>>>> Linking CXX executable blackwhite
>>>>>>> [ 33%] Built target blackwhite
>>>>>>> Scanning dependencies of target diamond
>>>>>>> [ 50%] Building CXX object CMakeFiles/diamond.dir/tests/diamond.cc.o
>>>>>>> Linking CXX executable diamond
>>>>>>> [ 50%] Built target diamond
>>>>>>> Scanning dependencies of target lyapunov
>>>>>>> [ 66%] Building CXX object CMakeFiles/lyapunov.dir/examples/lyapunov.cc.o
>>>>>>> Linking CXX executable lyapunov
>>>>>>> [ 66%] Built target lyapunov
>>>>>>> Scanning dependencies of target pngtest
>>>>>>> [ 83%] Building CXX object CMakeFiles/pngtest.dir/examples/pngtest.cc.o
>>>>>>> Linking CXX executable pngtest
>>>>>>> [ 83%] Built target pngtest
>>>>>>> Scanning dependencies of target pngwriter
>>>>>>> [100%] Building CXX object CMakeFiles/pngwriter.dir/src/pngwriter.cc.o
>>>>>>> Linking CXX shared library libpngwriter.so
>>>>>>> [100%] Built target pngwriter
>>>>>>> Install the project...
>>>>>>> -- Install configuration: ""
>>>>>>> -- Installing: /home/manzoor/lib/pngwriter/lib/libpngwriter.so
>>>>>>> -- Installing: /home/manzoor/lib/pngwriter/lib/libpngwriter.a
>>>>>>> -- Installing: /home/manzoor/lib/pngwriter/include/pngwriter.h
>>>>>>>
>>>>>>> export CUDA_ROOT=/usr/local/cuda-6.0/
>>>>>>> export MPI_ROOT=/usr/local/
>>>>>>> export PATH=$PATH:$HOME/src/picongpu/src/tools/bin
>>>>>>> export PNGWRITER_ROOT=$HOME/lib/pngwriter
>>>>>>>
>>>>>>> mkdir -p ~/src ~/build ~/paramSets ~/runs
>>>>>>>
>>>>>>> ~/src/picongpu/createParameterSet ~/src/picongpu/examples/LaserWakefield/ paramSets/lwfa/
>>>>>>>
>>>>>>> cd build/
>>>>>>>
>>>>>>> manzoor@manzoor-gpu:~/build$ ~/src/picongpu/configure -a sm_30 ../paramSets/lwfa
>>>>>>> cmake command: cmake -DCUDA_ARCH=sm_20 -DCMAKE_INSTALL_PREFIX=../paramSets/lwfa -DPIC_EXTENSION_PATH=../paramSets/lwfa -DCUDA_ARCH=sm_30 /home/manzoor/src/picongpu
>>>>>>> *CMake Error: The source "/home/manzoor/src/picongpu/CMakeLists.txt" does not match the source "/home/manzoor/src/pngwriter/CMakeLists.txt" used to generate cache. Re-run cmake with a different source directory.*
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Aug 6, 2015 at 2:07 PM, Huebl, Axel wrote:
>>>>>>>
>>>>>>>> Dear Manzoor,
>>>>>>>>
>>>>>>>> welcome to our user list!
>>>>>>>>
>>>>>>>> The GTX 670 is a Kepler generation card with sm_30, so you are good to go from the hardware side (we support sm_20 "Fermi" and upward):
>>>>>>>> https://developer.nvidia.com/cuda-gpus
>>>>>>>>
>>>>>>>> We would recommend you install a Linux operating system, the latest CUDA Toolkit
>>>>>>>> https://developer.nvidia.com/cuda-downloads
>>>>>>>>
>>>>>>>> and the additional required tools and libraries documented here:
>>>>>>>>
>>>>>>>> https://github.com/ComputationalRadiationPhysics/picongpu/blob/master/doc/INSTALL.md#requirements
>>>>>>>>
>>>>>>>> They are all pretty standard, and most are shipped as packages too, e.g., in Debian, Ubuntu and Arch. Please read the instructions we provide in this file carefully.
>>>>>>>>
>>>>>>>> I also recommend installing pngwriter as described under "optional", since it allows an easy check of the output with our png (preview) plugin.
>>>>>>>>
>>>>>>>> Once you have installed the requirements, just scroll a bit down in the INSTALL.md guide and set up a simulation case. This is additionally documented in a YouTube video:
>>>>>>>> https://www.youtube.com/watch?v=7ybsD8G4Rsk
>>>>>>>>
>>>>>>>> With the binary compiled, you can set plugins in the case's *.cfg file when you are at this point. All available options are documented here
>>>>>>>>
>>>>>>>> https://github.com/ComputationalRadiationPhysics/picongpu/blob/master/doc/TBG_macros.cfg
>>>>>>>>
>>>>>>>> and in the wiki
>>>>>>>>
>>>>>>>> https://github.com/ComputationalRadiationPhysics/picongpu/wiki/PIConGPU-Plugins
>>>>>>>>
>>>>>>>> The tbg template you want to use is "bash_mpirun.tpl"; set the GPUs to "1 1 1" in your ".cfg" file.
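[Editor's note: for reference, a single-GPU `.cfg` of that era set the device split via TBG variables roughly as sketched below. The variable names and flags are assumptions based on doc/TBG_macros.cfg and should be checked against your own copy; the grid size is the one Manzoor reports working later in this thread.]

```shell
# Sketch of a single-node, single-GPU submit/0001gpus.cfg
# (variable names assumed from doc/TBG_macros.cfg; verify locally).
TBG_gpu_x=1   # GPUs along x
TBG_gpu_y=1   # GPUs along y
TBG_gpu_z=1   # GPUs along z

# Total grid cells; the size Manzoor's 2 GB card handled:
TBG_gridSize="-g 64 128 64"
# Number of time steps (example value, adjust to your case):
TBG_steps="-s 2000"
```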
>>>>>>>>
>>>>>>>> For further resources, please continue to read (links below):
>>>>>>>>
>>>>>>>> [0] README.md
>>>>>>>> [1] doc/INSTALL.md
>>>>>>>> [2] our wiki
>>>>>>>> [3] doc/TBG_macros.cfg
>>>>>>>> [4] closed questions in our issue tracker
>>>>>>>>
>>>>>>>> If problems pop up along the way, feel free to ask again on the list!
>>>>>>>>
>>>>>>>> Best regards,
>>>>>>>> Axel Huebl
>>>>>>>>
>>>>>>>> [0] https://github.com/ComputationalRadiationPhysics/picongpu#picongpu---a-many-gpgpu-pic-code
>>>>>>>> [1] https://github.com/ComputationalRadiationPhysics/picongpu/blob/master/doc/INSTALL.md
>>>>>>>> [2] https://github.com/ComputationalRadiationPhysics/picongpu/wiki
>>>>>>>> [3] https://github.com/ComputationalRadiationPhysics/picongpu/blob/master/doc/TBG_macros.cfg
>>>>>>>> [4] https://github.com/ComputationalRadiationPhysics/picongpu/issues?q=is%3Aissue+label%3Aquestion+is%3Aclosed
>>>>>>>>
>>>>>>>> On 06.08.2015 11:21, k.manzoorolajdad wrote:
>>>>>>>>> Hi,
>>>>>>>>> I am new to CUDA and computational radiation physics and want to use PIConGPU, but I don't have a GPU cluster.
>>>>>>>>> I have a single GPU (GeForce GTX 670) and want to test and run the code.
>>>>>>>>>
>>>>>>>>> How can I run the code on a single machine with a single GPU?
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> Manzoor
>>>>>>>>> M.S. student of physics
>>>>>>>>> Tehran University, Iran
>>>>>>>>
>>>>>>>> --
>>>>>>>> Axel Huebl
>>>>>>>> Phone +49 351 260 3582
>>>>>>>> https://www.hzdr.de/crp
>>>>>>>> Computational Radiation Physics
>>>>>>>> Laser Particle Acceleration Division
>>>>>>>> Helmholtz-Zentrum Dresden - Rossendorf e.V.
>>>>>>>>
>>>>>>>> Bautzner Landstrasse 400, 01328 Dresden
>>>>>>>> POB 510119, D-01314 Dresden
>>>>>>>> Vorstand: Prof. Dr.Dr.h.c. R. Sauerbrey
>>>>>>>>           Prof. Dr.Dr.h.c. P. Joehnk
>>>>>>>> VR 1693 beim Amtsgericht Dresden
>>>>
>>>> #############################################################
>>>> This message is sent to you because you are subscribed to the mailing list.
>>>> To unsubscribe, or to switch to the DIGEST or INDEX mode, e-mail the list's administrative addresses.

--
Axel Huebl
Phone +49 351 260 3582
https://www.hzdr.de/crp
Computational Radiation Physics
Laser Particle Acceleration Division
Helmholtz-Zentrum Dresden - Rossendorf e.V.

Bautzner Landstrasse 400, 01328 Dresden
POB 510119, D-01314 Dresden
Vorstand: Prof. Dr.Dr.h.c. R. Sauerbrey
          Prof. Dr.Dr.h.c. P. Joehnk
VR 1693 beim Amtsgericht Dresden
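[Editor's note: the `tbg/submit.start: line 37: /home/manzoor/picongpu.profile: No such file or directory` warning in the logs above means tbg could not source a profile file. It can be silenced by collecting the exports posted earlier in this thread into that file; a sketch, using only paths that appear in the thread:]

```shell
# ~/picongpu.profile -- sourced by tbg at job start (see the
# "No such file or directory" warning in the logs above).
# Contents mirror the exports Manzoor posted earlier in this thread.
export PICSRC=$HOME/src/picongpu
export CUDA_ROOT=/usr/local/cuda-6.0/
export MPI_ROOT=/usr/local/
export PATH=$PATH:$HOME/src/picongpu/src/tools/bin
export PNGWRITER_ROOT=$HOME/lib/pngwriter
```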