computationalradiationphysics / clara2
Clara2 - a parallel classical radiation calculator based on Liénard-Wiechert potentials
License: GNU General Public License v3.0
The function file_exists() is defined in single_trace.hpp and single_trace.cpp. This is misleading and the code should be placed in its own file. This would also improve reusability.
write the wiki documentation for developers and users
In prepare_job.sh, make is called twice - once without arguments, the second time with a parallelization argument. This should be cleaned up.
There are several warnings with gcc 4.6.2 when compiling clara2:
main.cpp: In function ‘int main()’:
main.cpp:58:7: warning: variable ‘return_value’ set but not used [-Wunused-but-set-variable]
The function single_trace
in single_trace.cpp still contains code to handle several directions.
This needs to be cleaned up. Perhaps even rename the files.
The interface of get_spectrum is outdated. It would be better to write two methods that return either the frequency or the spectrum.
Hello,
I'm trying to calculate Thomson and Compton scattering, so I found the program CLARA2 on GitHub. But when I compile this program at a supercomputer center following the tutorial on the website, the following problems occurred:
siom003@login3 ~/qzy/CLARA2/clara2-0.1.0/src> ./prepare_job.sh
for MPI enter 1
for PBS-Array jobs enter 2
1
MPI choosen
clara2_hypnos.modules: line 1: /etc/profile.modules: No such file or directory
ModuleCmd_Load.c(204):ERROR:105: Unable to locate a modulefile for 'gcc/4.6.2'
ModuleCmd_Load.c(204):ERROR:105: Unable to locate a modulefile for 'infiniband/1.0.0'
ModuleCmd_Load.c(204):ERROR:105: Unable to locate a modulefile for 'openmpi/1.6.0'
ModuleCmd_Load.c(204):ERROR:105: Unable to locate a modulefile for 'fftw/3.3.4'
ModuleCmd_Load.c(204):ERROR:105: Unable to locate a modulefile for 'editor/emacs'
make -C ./include/
make[1]: Entering directory `/public/home/users/siom003/qzy/CLARA2/clara2-0.1.0/src/include'
g++ -Wall -O3 -c detector_e_field.cpp
g++ -Wall -O3 -c detector_dft.cpp
g++ -Wall -O3 -c -lfftw3 -lm detector_fft.cpp
In file included from detector_fft.cpp:27:
ned_fft.hpp: In member function ‘void ned_FFT<A, T>::fft(T*, long unsigned int)’:
ned_fft.hpp:103: error: ‘input’ was not declared in this scope
ned_fft.hpp:104: error: ‘output’ was not declared in this scope
ned_fft.hpp: In member function ‘void ned_FFT<A, T>::fft(T*, long unsigned int) [with A = double, T = Vector<double, 3u>]’:
ned_fft.hpp:71: instantiated from ‘ned_FFT<A, T>::ned_FFT(unsigned int, A*, T*) [with A = double, T = Vector<double, 3u>]’
detector_fft.cpp:155: instantiated from here
ned_fft.hpp:102: warning: unused variable ‘in’
ned_fft.hpp:102: warning: unused variable ‘out’
make[1]: *** [detector_fft.o] Error 1
make[1]: Leaving directory `/public/home/users/siom003/qzy/CLARA2/clara2-0.1.0/src/include'
make: *** [subsystem] Error 2
mpic++ -Wall -O3 -lfftw3 -lm -D__PARALLEL_SETTING__=1 -c -fopenmp -lz main.cpp
icpc: warning #10315: specifying -lm before files may supersede the Intel(R) math library and affect performance
g++ -Wall -O3 -lfftw3 -lm -c -fopenmp -lz -I./include/ all_directions.cpp
make: *** No rule to make target `include/libDetector.a', needed by `MPI'. Stop.
siom003@login3 ~/qzy/CLARA2/clara2-0.1.0/src> cd include/
siom003@login3 ~/qzy/CLARA2/clara2-0.1.0/src/include> vi ned_fft.hpp
siom003@login3 ~/qzy/CLARA2/clara2-0.1.0/src/include> cd ..
siom003@login3 ~/qzy/CLARA2/clara2-0.1.0/src> make clean
rm -f *o
rm executable
rm: cannot remove `executable': No such file or directory
make: *** [clean] Error 1
Can you help me to solve this problem? Thank you very much!
With default modules, clara2 does not seem to compile due to a missing zlib. This can be solved by choosing suitable modules.
But then the following error might occur, due to different module setups during compile and run time.
/var/spool/torque/mom_priv/jobs/10086.hypnos2.SC: line 11: /home/rp6038/picongu.profile: No such file or directory
/var/spool/torque/mom_priv/jobs/10086.hypnos2.SC: line 13: mpiexec: command not found.
Either there is a problem with loading something from ~, or the modules are set up wrongly.
Setting up modules for both compile and run time should be automated.
Currently, both C-style error handling (using return != 0) and C++-style error handling (using throw "error name") are used concurrently.
A more consistent, C++-style error handling should be used.
Example file: gzip_lib.hpp
Fix alignment and white space in all code files in src/
The using namespace std should be removed - at least at the global level.
Build a library interface for clara2 that can be used as a plugin in GPT (General Particle Tracker)
Hide setup variables behind a namespace
We should support the option to export the amplitudes (C^3) instead of just the magnitude (R^1), in order to take phase and polarization into account.
mentioning:
@joyiuac
@BeyondEspresso
move trace file location into setup file
remove all unnecessary code in the current code base
I would like to add the python scripts used in picongpu for analyzing the output.
@ax3l What do I have to do concerning the license header?
Leave it as it is (picongpu
), change it to clara2
or mention both?
As suggested by @psychocoderHPC and @ax3l, switching to a CMake (3.0+) based build for clara2 would ease the setup with module systems and would also allow building on Windows and OSX machines.
The main issue with CMake is that FFTW has no default find module.
comment the existing code base
During the development of Clara2, I tested a parallelization over directions: each MPI task ran through all traces and computed the total radiation spectrum for only a single direction. This caused a lot of data traffic and overloaded the network of hypnos. Therefore, the current parallelization distributes work per trajectory.
The old parallelization scheme is still supported by the code, but is currently not active. This can be seen e.g. in convert_to_matrix.cpp.
Removing it completely would make the code slimmer and more maintainable, but we would lose flexibility.
I personally tend toward removing the code, since it is currently not well written anyway. In case of a change in parallelization, one could rewrite the structure then.
@BeyondEspresso @TheresaBruemmer @bussmann
What do you think?
Currently, detector_e_field is not used in the code. But I did not remove the code, because I believe it might be useful for future research again (see the argument in issue #5).
The detector_e_field should be selectable and easy to set up.
@belfhi provided some code so that clara2 runs on the Maxwell cluster at DESY. This should be included into the mainline code.
As needed by @QJohn2017, a setup script for tianhe2 is required (see #89). Since tianhe2 uses SLURM for scheduling, either the ./prepare_job script needs to be adapted into a ./prepare_job_tianhe2.sh script that creates a SLURM submit file, or we go directly with a submit file that focuses on MPI jobs only (since tianhe2 is large and probably has to handle quite a lot of jobs, MPI is probably the better choice for this system).
Additionally, submit scripts for other clusters (taurus, PizDaint, etc.) should be provided.
@QJohn2017 Are you planning to submit with MPI only, or are you also considering running SLURM array jobs?
If only an MPI job is planned, a simple submit script should be sufficient.
Create a single, namespace-protected param file to be used in the entire simulation. This should be similar to the param files in PIConGPU. It is supposed to simplify the setup of simulations and make later retrieval of the simulation parameters easier.
The function run_through_data from run_through_data.hpp is called once for each direction (see single_trace.cpp and all_directions.cpp).
This is inefficient and should be changed to a single call of the function.
Dear users,
is anybody relying on the array job support? I have not used it for years and would be very happy to rely entirely on MPI. This would allow a more lightweight code and setup routine, and would additionally support getting rid of process_data by using a direct MPI gather instead of storing spectra for each trajectory on disk.
The only advantage of array jobs is that they might start faster on very small clusters that are overbooked.
@TheresaBruemmer Would this be the case for Maxwell?
On Hypnos, Taurus and PizDaint this would not be an issue.
@QJohn2017 I assume that, since you are using the second largest cluster in the world, it is not an issue for you either. Is this correct?
As @QJohn2017 mentioned in #89, there are several warnings when icpc is used as the compiler (e.g. behind the Open MPI mpic++ wrapper) instead of gcc. This should be resolved for future use.
The warnings for
mpic++ -Wall -O3 -lfftw3 -lm -D__PARALLEL_SETTING__=1 -c -fopenmp -lz main.cpp
were:
icpc: warning #10314: specifying -lm before object files may supercede the Intel(R) math library and affect performance
icpc: command line warning #10006: ignoring unknown option '-fopenmp'
#pragma once
parallel_jobs.h(22): remark #1782: #pragma once is obsolete. Use #ifndef guard instead.
#pragma once
^
parallel_jobs.h(54): remark #1418: external function definition with no prior declaration
int start_array(int* numtasks,
^
parallel_jobs.h(107): remark #1418: external function definition with no prior declaration
int end_array(void)
^
parallel_jobs.h(130): remark #1418: external function definition with no prior declaration
int check_break(void)
^
all_directions.hpp(21): remark #1782: #pragma once is obsolete. Use #ifndef guard instead.
#pragma once
^
main.cpp(71): remark #181: argument is incompatible with corresponding format string conversion
printf("this is job %5d of %5d jobs in the array (on %s = rank: %d)\n", i, N_max, pHost, rank);
^
main.cpp(71): remark #181: argument is incompatible with corresponding format string conversion
printf("this is job %5d of %5d jobs in the array (on %s = rank: %d)\n", i, N_max, pHost, rank);
^
and (as in all versions) during:
g++ -Wall -O3 -lfftw3 -lm -c -fopenmp -I./include/ single_direction.cpp
the warning:
In file included from single_direction.cpp:35:
run_through_data.hpp: In function ‘void run_through_data(const one_line*, unsigned int, DET) [with DET = Detector_fft*]’:
run_through_data.hpp:56: warning: ‘time_fill.Discrete<double>::future’ may be used uninitialized in this function
run_through_data.hpp:56: warning: ‘time_fill.Discrete<double>::now’ may be used uninitialized in this function
run_through_data.hpp:56: warning: ‘time_fill.Discrete<double>::old’ may be used uninitialized in this function
Currently, trajectory input data is converted in run_through_data.hpp via some constants defined in settings.hpp. However, a more general approach would be useful.
Instead of storing data on the file system, it would be better to use mpi-reduces instead to reduce the access to the file system.
In the DFT and FFT detector interface of add_to_spectrum, several parameters are used that are probably not needed. The number of parameters should be reduced.
I think the discretization of time used to write the signal array in detector_e_field is off by one delta_t.
There can be NaN values in the spectrum, which destroy the total result when used with process_data.
I suspect they come from the Nyquist limit, which is avoided in the interpolation. They should be set to zero.
Currently, the detector direction setup is distributed between all_directions.cpp and single_direction.cpp. This should be combined into a functor that allows setting the observation direction (and, later, the output format).
The struct one_line (in import_from_file) should be a template, so that it does not depend on double input only.
Furthermore, the name one_line and the file name are misleading and should be replaced by something clearer.
The computation of stepwidth in single_trace.cpp still requires setting up the index of the time in const one_line* data. This is a very inconvenient way to compute the width of the trajectory's time step.
A better way to organize the data should be developed.
This is a continuation of issue #93.
Since the discussion changed from how to set up the environment on tianhe2 to how to set up Clara2 in general, I think a new issue should be used (to avoid scrolling down through all the issues related to the module system on tianhe2).
This question was asked by @QJohn2017 in #93.
At the beginning of settings.hpp you find the basic parameters:
- omega_max is the maximum frequency of radiation you want to resolve, so your frequency range will go from 0.0 1/s to omega_max 1/s
- theta_max is the maximum opening angle of your detector in degree. Setting up the detector is not yet generic and requires some code adjustments we will discuss later.
- N_spectrum is the number of frequencies you want to sample between 0.0 1/s and omega_max 1/s
- N_theta is the number of angle values between 0 degree and theta_max degree you want to sample
- N_phi is the number of angles you want to sample orthogonally to theta
- N_trace is the number of provided trajectories for which you want to calculate the spectrum
- fft_length_factor is a factor to increase the sampling of the trajectories via linear interpolation. This is essential when highly-nonlinear Thomson scattering occurs only during a brief period compared to the entire duration of the trajectory (see https://doi.org/10.5281/zenodo.843510, section 4.1.3)
- ascii_output: if true, the spectra are returned as text files; if false, as binary files
- N_char_filename is the number of characters needed to give the trajectory path. 256 is usually sufficient, but if you have long directory names, more might be needed.
- traceFileTemplate is the template for locating your input trajectories, with [C-style](http://www.cplusplus.com/reference/cstdio/printf/) replacements
- outputFileTemplate is the template for the output spectra files, also with C-style replacements
Currently, data is loaded in run_through_data.hpp, where many manual unit conversions are required.
This should be made easier to set up via the settings.hpp file.
(derived from #11)
Write frequency setup routines, similar to PIConGPU, that allow an easy setup of a detector's frequency range.
Add a selector between DFT and FFT detector.
In clara2, there is still a DFT method running on the CPU. I am not sure if we should still support this.
@BeyondEspresso @TheresaBruemmer @bussmann
What do you think?
set up automated compile tests for each pull request
The library gzip_lib.hpp contains not only file handling functions for compressed files but also normal binary file handling functions.
For cleaner naming, the functions not associated with compressed files should be moved to another *.hpp file.
Currently, the frequency bin width used to calculate the total energy radiated in a given direction is set via an arbitrary index:
result *= (frequency[7] - frequency[6]);
This should be avoided, so that the result does not depend on arbitrarily chosen points of the frequency axis.
Similar to PIConGPU, an automation process should be developed that
In the code and the output there is a typo: asci is written instead of ascii.
Allow comments in trace files to allow identification of columns by user
Add compile test system to check pull request for consistency.
Currently, #include is used in both *.hpp and *.cpp files without real structure. This should be restructured so that only a minimal set of #include directives is used in *.hpp files, to avoid compile conflicts.
https://github.com/ComputationalRadiationPhysics/clara2/wiki/GitHub-workflow-in-a-nutshell
Please insert the following instruction between steps 3) and 4):
cd clara2