xcompact3d / incompact3d
Current CPU version of our solver for the Navier-Stokes equations
Home Page: https://xcompact3d.readthedocs.io/en/latest/
License: BSD 3-Clause "New" or "Revised" License
The lock-exchange flow configuration does not work when ilmn is .FALSE.
We get this error:
Xcompact3d is run with the default file -->input.i3d
===========================================================
======================Xcompact3D===========================
===Copyright (c) 2018 Eric Lamballais and Sylvain Laizet===
===Modified by Felipe Schuch and Ricardo Frantz============
===Modified by Paul Bartholomew, Georgios Deskos and=======
===Sylvain Laizet -- 2018- ================================
===========================================================
Git version : v3.0-397-gff531df
===========================================================
Simulating lock-exchange
===========================================================
Reynolds number Re : 2236.000
xnu : 0.00044723
===========================================================
p_row, p_col : 0 0
===========================================================
Time step dt : 0.00480000
Temporal scheme : Adams-bashforth 2
===========================================================
ifirst : 1
ilast : 100000
===========================================================
Lx : 18.00000000
Ly : 2.00000000
Lz : 2.00000000
nx : 181
ny : 29
nz : 27
===========================================================
istret : 0
beta : 0.25906515
===========================================================
nu0nu : 4.00000000
cnu : 0.44000000
===========================================================
Scalar : off
numscalar : 0
===========================================================
spinup_time : 0
wrotation : 0.00000000
===========================================================
Immersed boundary : off
===========================================================
Boundary condition velocity field:
nclx1, nclxn : 1,1
ncly1, nclyn : 2,1
nclz1, nclzn : 1,1
===========================================================
Numerical precision: Double
===========================================================
High and low speed : u1= 2.00 and u2= 1.00
Gravity vector : (gx, gy, gz)=( 0.00000000, -1.00000000, 0.00000000)
Initial front location: 1.0000000000000000
===========================================================
In auto-tuning mode......
factors: 1
processor grid 1 by 1 time= 3.2067000000068901E-003
the best processor grid is probably 1 by 1
Initializing variables...
Using the hyperviscous operator with (nu_0/nu,c_nu) = ( 4.0000000000000000 , 0.44000000000000000 )
Using the hyperviscous operator with (nu_0/nu,c_nu) = ( 4.0000000000000000 , 0.44000000000000000 )
Using the hyperviscous operator with (nu_0/nu,c_nu) = ( 4.0000000000000000 , 0.44000000000000000 )
===========================================================
Visu module requires 0.267573029 GB
===========================================================
===========================================================
Diffusion number
cfl_diff_x : 0.00021467
cfl_diff_y : 0.00042075
cfl_diff_z : 0.00036279
cfl_diff_sum : 0.00099821
===========================================================
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7f07f6c802ed in ???
#1 0x7f07f6c7f503 in ???
#2 0x7f07f62fd03f in ???
#3 0x7f07f644cbe2 in ???
#4 0x55a8f26b4b5c in __lockexch_MOD_set_fluid_properties_lockexch
at src/BC-Lock-exchange.f90:754
#5 0x55a8f26b4b5c in __lockexch_MOD_init_lockexch
at src/BC-Lock-exchange.f90:175
#6 0x55a8f277a554 in __case_MOD_init
at src/case.f90:93
#7 0x55a8f27e81fe in init_xcompact3d_
at src/xcompact3d.f90:237
#8 0x55a8f2402918 in xcompact3d
at src/xcompact3d.f90:50
#9 0x55a8f2402918 in main
at src/xcompact3d.f90:35
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node DESKTOP-RD7V3T4 exited on signal 11 (Segmentation fault).
Take the Lock-exchange/input.i3d and change:
- ilmn = .TRUE. ! Enable low Mach number
+ ilmn = .FALSE. ! Enable low Mach number
Then run the simulation.
I see two options:
1. Allocate mu1 anyway. Right now the code at src/variables.f90 is:
if (ilmn) then
   call alloc_x(mu1)
   mu1(:,:,:) = one
endif
2. Protect the references to mu1 at src/BC-Lock-exchange.f90 with if (ilmn) then.
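A sketch of both options is below. alloc_x, mu1, one, and set_fluid_properties_lockexch all appear in the code and backtrace above, but the exact call sites and argument lists here are illustrative, not the actual patch.

```fortran
! Option 1: in src/variables.f90, allocate mu1 unconditionally so the
! lock-exchange routines can reference it even when ilmn is .FALSE.
call alloc_x(mu1)
mu1(:,:,:) = one

! Option 2: in src/BC-Lock-exchange.f90, guard the references instead
! (argument list elided; see the subroutine in the backtrace above)
if (ilmn) call set_fluid_properties_lockexch(...)
```

Option 1 costs one extra 3D array when ilmn is off; option 2 keeps memory usage unchanged but requires guarding every reference.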
I personally feel there is a great need for proper documentation of the software for beginners. The current documentation could be expanded to cover all the new aspects and FAQs. More information about the statistics and visu output is also needed. This is just a suggestion; if required, I can volunteer for it.
Thanks,
Gaurav
This is an output from the example wind turbine ADM simulation:
Time step = 39975/ 400000, Time unit =9993.7500
......
===========================================================
Time step = 40000/ 400000, Time unit =*********
As seen, the Time unit field becomes too wide for its format to display.
This is the corresponding source code:
tools.f90: write(*,"(' Time step =',i7,'/',i7,', Time unit =',F9.4)") itime,ilast,t
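A minimal sketch of a fix: F9.4 overflows (prints asterisks) once t reaches 10000, so widening the edit descriptor avoids the issue. The choice of F12.4 here is an assumption, not the project's actual fix.

```fortran
! F9.4 can only hold values below 10000.0; F12.4 defers the overflow
! to values of t that are unreachable in practice
write(*,"(' Time step =',i7,'/',i7,', Time unit =',F12.4)") itime, ilast, t
```

An ES-style descriptor (e.g. ES12.5) would never overflow, at the cost of scientific notation in the log.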
Hello,
As discussed during the last showcase event (27-28 April 2023, Imperial College London), it would be interesting to implement the pipe flow in the main branch of the code, given its academic nature and importance among the canonical wall-bounded flows. Through my thesis, I have worked on a full implementation of this type of flow in Xcompact3d, including heat transfer, with the pipe geometry represented by the Lagrange polynomial immersed boundary method (iibm=2).
I started the implementation of the pipe flow in 2019, starting from the version of the code available in the main branch at that time. As the ultimate goal of my thesis was the introduction of an IB-based numerical strategy for Conjugate Heat Transfer simulations, many developments were made throughout these 4 years, including viscous filtering and fluid-solid thermal coupling for complex geometries. This version of the code is fully operational and I believe it can serve as a basis to guide the implementation in the main branch.
As we discussed during the showcase event, perhaps we could start with the basic implementation/validation of the velocity solution alone as a first step (geometry creation in the frame of IBM, laminar/turbulent initial conditions, ...) before moving on to the heat transfer implementation. Naturally, I'm willing to restart from the basics and advance step-by-step, according to the guidelines you provide to implement this flow configuration.
Kind regards,
Rodrigo Vicente Cruz
Hello! Trying to compile in single precision with Intel or GCC, and it fails.
mpiifort -fpp -O3 -ipo -fp-model fast=2 -mcmodel=large -safe-cray-ptr -I/lib -qmkl -I./src -I./decomp2d -fpp -O3 -ipo -fp-model fast=2 -mcmodel=large -safe-cray-ptr -I/lib -qmkl -DVERSION=\"\" -I/opt/intel/oneapi/mkl/2021.4.0/include -c src/ibm.f90
src/ibm.f90(533): error #6633: The type of the actual argument differs from the type of the dummy argument. [XA]
call cubic_spline(xa,ya,na,xpol,ypol)
----------------------------------------------^
src/ibm.f90(533): error #6633: The type of the actual argument differs from the type of the dummy argument. [YA]
call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------^
src/ibm.f90(533): error #6633: The type of the actual argument differs from the type of the dummy argument. [XPOL]
call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------------^
src/ibm.f90(533): error #6633: The type of the actual argument differs from the type of the dummy argument. [YPOL]
call cubic_spline(xa,ya,na,xpol,ypol)
------------------------------------------------------------^
src/ibm.f90(697): error #6633: The type of the actual argument differs from the type of the dummy argument. [XA]
call cubic_spline(xa,ya,na,xpol,ypol)
----------------------------------------------^
src/ibm.f90(697): error #6633: The type of the actual argument differs from the type of the dummy argument. [YA]
call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------^
src/ibm.f90(697): error #6633: The type of the actual argument differs from the type of the dummy argument. [XPOL]
call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------------^
src/ibm.f90(697): error #6633: The type of the actual argument differs from the type of the dummy argument. [YPOL]
call cubic_spline(xa,ya,na,xpol,ypol)
------------------------------------------------------------^
src/ibm.f90(850): error #6633: The type of the actual argument differs from the type of the dummy argument. [XA]
call cubic_spline(xa,ya,na,xpol,ypol)
----------------------------------------------------^
src/ibm.f90(850): error #6633: The type of the actual argument differs from the type of the dummy argument. [YA]
call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------------^
src/ibm.f90(850): error #6633: The type of the actual argument differs from the type of the dummy argument. [XPOL]
call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------------------^
src/ibm.f90(850): error #6633: The type of the actual argument differs from the type of the dummy argument. [YPOL]
call cubic_spline(xa,ya,na,xpol,ypol)
------------------------------------------------------------------^
....
src/acl_utils.f90(94): error #8209: If type specification is omitted, each element in an array-constructor must have the same type and kind type parameters. [VOX]
p=reshape([0.0d0,vOx,vOy,vOz],[4,1])
--------------------^
src/acl_utils.f90(94): error #8209: If type specification is omitted, each element in an array-constructor must have the same type and kind type parameters. [VOY]
p=reshape([0.0d0,vOx,vOy,vOz],[4,1])
------------------------^
src/acl_utils.f90(94): error #8209: If type specification is omitted, each element in an array-constructor must have the same type and kind type parameters. [VOZ]
p=reshape([0.0d0,vOx,vOy,vOz],[4,1])
----------------------------^
...
src/acl_elem.f90(620): error #6633: The type of the actual argument differs from the type of the dummy argument. [0.0D0]
Call QuatRot(txtmp,tytmp,tztmp,theta,nrx,nry,nrz,0.0d0,0.0d0,0.0d0,vrx,vry,vrz)
---------------------------------------------------------^
src/acl_elem.f90(620): error #6633: The type of the actual argument differs from the type of the dummy argument. [0.0D0]
Call QuatRot(txtmp,tytmp,tztmp,theta,nrx,nry,nrz,0.0d0,0.0d0,0.0d0,vrx,vry,vrz)
---------------------------------------------------------------^
src/acl_elem.f90(620): error #6633: The type of the actual argument differs from the type of the dummy argument. [0.0D0]
Call QuatRot(txtmp,tytmp,tztmp,theta,nrx,nry,nrz,0.0d0,0.0d0,0.0d0,vrx,vry,vrz)
---------------------------------------------------------------------^
...
It goes on like this in several places inside the acl_* files.
With GCC I also get:
mpif90 -cpp -O3 -funroll-loops -floop-optimize -g -Warray-bounds -fcray-pointer -fbacktrace -ffree-line-length-none -fallow-argument-mismatch -I./src -I./decomp2d -cpp -O3 -funroll-loops -floop-optimize -g -Warray-bounds -fcray-pointer -fbacktrace -ffree-line-length-none -fallow-argument-mismatch -DVERSION=\"\" -c src/dynstall_legacy.f90
src/dynstall_legacy.f90:365:36:
365 | dCDF=KD*(CLstat-CLF)*sign(1.0d0,CLstat)
| 1
Error: ‘b’ argument of ‘sign’ intrinsic at (1) must be the same type and kind as ‘a’
src/dynstall_legacy.f90:378:19:
378 | if (sign(1.0d0,lb%dcv*lb%CLRefLE)<0 .OR. abs(alphaL-AOA0)>acut .OR. lb%CLRateFlag<0) then
| 1
Error: ‘b’ argument of ‘sign’ intrinsic at (1) must be the same type and kind as ‘a’
make: *** [Makefile:107: src/dynstall_legacy.o] Error 1
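Both families of errors come from double-precision literals (1.0d0, 0.0d0) meeting single-precision variables in a single-precision build. A hedged sketch of the kind of change that resolves them, assuming the code's usual working-precision kind mytype and its zero/one constants (the exact names in these modules are an assumption):

```fortran
! sign: both arguments must have the same kind, so use a kind-matched
! constant instead of the 1.0d0 literal
dCDF = KD*(CLstat - CLF)*sign(one, CLstat)

! array constructor: an explicit type-spec converts all elements to one
! kind, so mixing a literal with working-precision variables is allowed
p = reshape([real(mytype) :: 0.0d0, vOx, vOy, vOz], [4, 1])

! actual/dummy mismatch: pass kind-matched zeros instead of 0.0d0 literals
call QuatRot(txtmp,tytmp,tztmp,theta,nrx,nry,nrz,zero,zero,zero,vrx,vry,vrz)
```

The same pattern (replace d0 literals with kind-parameterised constants) should apply at each of the reported lines.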
Hello,
In the subroutine avg3d, when estimating the average of a 3D field, the cell at i=nx is skipped when nclx1=1:
Line 841 in e090f06
It seems odd that skipping the cell at i=nx depends on the boundary condition at x=0 (nclx1) and not on the boundary condition at x=Lx (nclxn)?
I am running a cylinder-case simulation on our HPC with 200 cores. The first time, I ran the simulation with a 750x750x16 mesh on 100x20 cores; it terminated after running 40000 time steps out of 120000. Going through issue #141, I modified the mesh to 1041x1040x20 with 20x10 cores based on the suggestions. This time the simulation also terminated, at 70000 steps out of 600000. I believe there is some bug which is causing the simulation to crash at restart points.
The error I got,
I am using openmpi 4.0.2 and gfortran 10.2.0 to compile and run the simulation. I have attached the input.i3d file.
inputi3d.txt
I am trying to run a simulation on our HPC, but I am not able to compile the code. I get the error function 'is_contiguous' at (1) has no implicit type
in the decomp2d/io.f90 file. The function is_contiguous is not defined anywhere in the file. Interestingly, I didn't face this error when I compiled the code on my laptop (Ubuntu).
Steps to reproduce the behavior:
module load compiler/openmpi/3.1.0/gnu, then make clean and make to compile.
As I mentioned earlier, I didn't face this error while compiling the code on my laptop, but I do face it when compiling on the HPC.
A suggestion: users may not need to collect statistics at every step. We could add a variable to the program so that the statistics are calculated, for example, every 50 steps. This would save some computing resources.
Originally posted by @fangjian19 in #147 (comment)
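A sketch of the idea: gate the statistics call on a modulo test of the time step. Here istatfreq is a hypothetical new input parameter, and the names initstat and overall_statistic (with its argument list) are taken from the surrounding issues but used illustratively.

```fortran
! Hypothetical istatfreq read from input.i3d: sample statistics only
! every istatfreq steps once the averaging window has started
if (itime >= initstat .and. mod(itime, istatfreq) == 0) then
   call overall_statistic(ux1, uy1, uz1, phi1, pre1, ep1)
end if
```

With istatfreq = 1 this reduces to the current per-step behaviour, so it would be backward compatible.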
Hi,
I have a question relating to the pressure output in the visualisation subroutines. In particular, this relates to the pressure that is output compared with the physical pressure. This may be a misunderstanding on my part, but Kim and Moin (1985) states that
Thanks in advance,
Matthew
Line 2015 in 9abede4
It is probably a bad idea to include LGPL code inside Incompact3d (BSD).
Describe the bug
This is from a debug build with the Intel OneAPI ifx compiler. While running the standard channel flow case, I pick up the following error at the beginning of the code:
forrtl: severe (189): LHS and RHS of an assignment statement have incompatible types
Image PC Routine Line Source
xcompact3d 000000000278A0AF Unknown Unknown Unknown
xcompact3d 000000000043084B alloc_x_real 32 alloc.inc
xcompact3d 00000000004595A9 init_variables 141 variables.f90
xcompact3d 000000000079E3B9 init_xcompact3d 211 xcompact3d.f90
xcompact3d 000000000079DBB9 init_xcompact3d_. 0 xcompact3d.f90
xcompact3d 000000000079D228 xcompact3d 51 xcompact3d.f90
xcompact3d 000000000040F74D Unknown Unknown Unknown
libc-2.28.so 00001462526A5D85 __libc_start_main Unknown Unknown
xcompact3d 000000000040F66E Unknown Unknown Unknown
This is the v4.0 code.
List of updates / fix needed
Code should be updated to only print one restart.info file for consistency
Line 5198 in e8f90c7
Describe the bug
Compilation of the master branch (commit 34d69d2) results in the following compilation error:
This error is caused by the fact that BC-Pipe-flow.f90 was not included in the src/CMakeLists.txt file (probably just forgotten during the last commit). Adding BC-Pipe-flow.f90 to the CMakeLists.txt file fixes the problem.
I want to report two issues regarding the dynamic Smagorinsky model for you to consider.
Describe the bug
There is a compilation error when ADIOS2 support is switched on. This applies to both v4.0 and the latest master branch.
mpif90 -cpp -O3 -funroll-loops -floop-optimize -g -Warray-bounds -fcray-pointer -fbacktrace -ffree-line-length-none -fallow-argument-mismatch -DADIOS2 -I./src -I./decomp2d -cpp -O3 -funroll-loops -floop-optimize -g -Warray-bounds -fcray-pointer -fbacktrace -ffree-line-length-none -fallow-argument-mismatch -DDOUBLE_PREC -DVERSION=\"v4.0-249-g8d7b078\" -DADIOS2_USE_MPI -I/path/to/adios2/include/adios2/fortran -c decomp2d/io.f90
decomp2d/io.f90:139:87:
139 | call adios2_init(adios, trim(config_file), MPI_COMM_WORLD, adios2_debug_mode, ierror)
| 1
Error: There is no specific subroutine for the generic 'adios2_init' at (1)
make: *** [Makefile:110: decomp2d/io.o] Error 1
To Reproduce
make IO=adios2 ADIOS2DIR=/path/to/adios2 #with MPI wrapper in $PATH
Describe the bug
Running an example with turbulence modelling enabled (e.g. Smagorinsky) generates a file turb-data, however it should generate turb-data/nut_smag-X.bin for output step X.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Described above.
Line 338 in 1c010da
The +1 is not recommended when the domain is periodic in a given direction, since it produces some inconsistencies with dxraf, dyraf and dzraf in subroutine gene_epsi_3D.
I have the solution at my hackathon branch:
!complex_geometry
- nxraf=(nxm)*nraf+1;nyraf=(nym)*nraf+1;nzraf=(nzm)*nraf+1
+ nxraf=(nxm)*nraf
+ if (.not.nclx) nxraf=nxraf+1
+ nyraf=(nym)*nraf
+ if (.not.ncly) nyraf=nyraf+1
+ nzraf=(nzm)*nraf
+ if (.not.nclz) nzraf=nzraf+1
I will do a pull request with it soon.
I am testing a cylinder-case simulation with 2 cores on my laptop, before running it on the HPC using 200 cores (100x2), to check for any divergence issues. But the code terminates on its own with the following error.
Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted.
mpirun noticed that process rank 0 with PID 0 on node gauravVostro exited on signal 9 (Killed).
I used the command mpirun -np 2 xcompact3d. I have tried mesh sizes of 800x800x16 and 900x900x16; both produce the same error. I have also tried lowering the time step, but the issue persists. I have attached the input.i3d file. Please help me with this error.
This will enable checking that the user has set the correct time to restart from.
This is certainly possible in ADIOS2, with MPIIO???
The precision should be 4 for a single precision run
Line 542 in 11bd7c1
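A sketch of the sort of fix implied, assuming the build already defines DOUBLE_PREC for double-precision runs; the parameter name prec is hypothetical, not the variable used at the line referenced above.

```fortran
! Hypothetical: derive the bytes-per-real of the output files from the
! build's precision instead of hard-coding 8
#ifdef DOUBLE_PREC
   integer, parameter :: prec = 8   ! double precision run
#else
   integer, parameter :: prec = 4   ! single precision run
#endif
```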
The subroutine gene_epsi_3D was integrated into the main code as part of release 2.0. It came with two benefits: no more need for pre-processing, plus the performance and scalability of 2decomp. However, it demands a large amount of memory, mainly for the arrays xepsi, yepsi and zepsi.
In order to improve memory usage, I bring a few suggestions: allocate these arrays only when iibm .eq. 2, and preferably do not have more than one allocated at the same time. Let me know what you think about it; I could work on it.
It is also possible to produce the geometry externally with xcompact3d-toolbox.genepsi, but some bugs have to be solved first (#3).
Hi all,
Can you elaborate on how to visualize the output (field data)? I see a data folder with velocity components and vorticity, such as vort-309.bin. Do we have to convert them to a format recognizable by ParaView?
Thanks
Describe the bug
I am running the Xcompact3d code on a cluster using 96 processors. When the code starts to write a backup file, it fails with an error in the "rename" subroutine.
Here is the message at the end of the simulation:
mpiexec detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[46279,1],0]
Exit code: 2
! xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
And this is the error:
! xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
/var/spool/slurmd/job51286/slurm_script: line 0: unalias: cc: not found
Fortran runtime error: EXECUTE_COMMAND_LINE: Invalid command line
Error termination. Backtrace:
#0 0x7f77d67988d9 in set_cmdstat
at ../.././libgfortran/intrinsics/execute_command_line.c:58
#1 0x7f77d6798ab6 in set_cmdstat
at ../.././libgfortran/intrinsics/execute_command_line.c:89
#2 0x7f77d6798ab6 in execute_command_line
at ../.././libgfortran/intrinsics/execute_command_line.c:112
#3 0x86a792 in rename
at src/tools.f90:1027
#4 0x86a792 in __tools_MOD_restart
at src/tools.f90:254
#5 0x403017 in xcompact3d
at src/xcompact3d.f90:83
#6 0x403017 in main
at src/xcompact3d.f90:7
! xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This error is strange to me because it only happens sometimes. For instance, if I set the frequency for writing a backup file to 200000, the xcompact3d code writes the backup file at 200000 time steps, but it does not at 400000 time steps.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
I expect a backup file at each "icheckpoint", but the code does not write the backup file every time.
Thank you so much for your help.
William
Make sure docs up to date re what files get created and their contents - this has changed with the ADIOS2 integration
The typo doesn't affect the running of the code at all, but as I have found it, I would like to report it.
On line 410 of src/parameters.f90, the code reads
if (ilesmod.ne.0) then
   write(*,*) ' :DNS'
I think when ilesmod equals 0 the simulation is a DNS, so the if statement should be
if (ilesmod.eq.0) then
   write(*,*) ' :DNS'
Please check.
Hello, I am trying to import a custom geometry into the cylinder case by reading the coordinates of the domain inside the geometry from a dat file and setting them equal to remp manually. When I compile and run the code using mpirun -np 2 xcompact3d, it starts the procedure for the generation of the geometry but crashes with the error below.
The modification I made is in BC-Cylinder.f90: I have commented out the do loops which generated the cylinder and added the following code. I have attached the epsilondat.txt file for your reference.
open(2, file = 'src/epsilon.dat', status = 'unknown')
read(2,*) len
do i=1,len
read(2,*) ex,ey,ez
epsi(ex,ey,ez)=remp
enddo
close(2)
Error
[login07:194055:0:194055] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x7fc61379ec98)
==== backtrace (tid: 194055) ====
0 0x00000000007152dc __cyl_MOD_geomcomplex_cyl() /scratch/19ae30032/flat_plate_low/src/BC-Cylinder.f90:85
1 0x00000000007bb155 __genepsi_MOD_gene_epsi_3d() /scratch/19ae30032/flat_plate_low/src/genepsi3d.f90:226
2 0x00000000007c0a9d __genepsi_MOD_genepsi3d() /scratch/19ae30032/flat_plate_low/src/genepsi3d.f90:134
3 0x00000000007c173c init_xcompact3d_() /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:195
4 0x0000000000402b8f xcompact3d() /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:23
5 0x0000000000402b8f main() /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:7
6 0x0000000000022555 __libc_start_main() ???:0
7 0x0000000000403119 _start() ???:0
=================================
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7fc6868cd62f in ???
#1 0x7152dc in __cyl_MOD_geomcomplex_cyl
at src/BC-Cylinder.f90:85
#2 0x7bb154 in __genepsi_MOD_gene_epsi_3d
at src/genepsi3d.f90:226
#3 0x7c0a9c in __genepsi_MOD_genepsi3d
at src/genepsi3d.f90:134
#4 0x7c173b in init_xcompact3d_
at src/xcompact3d.f90:195
#5 0x402b8e in xcompact3d
at src/xcompact3d.f90:23
#6 0x402b8e in main
at src/xcompact3d.f90:7
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node login07 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
[login07:194046] 1 more process has sent help message help-mpi-btl-openib.txt / error in device init
[login07:194046] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
While debugging, I noticed that the subroutine geomcomplex_cyl is being called multiple times; is that normal? I also printed the values of nxi, nxf, nyi, nyf, nzi and nzf, and they keep changing. I am not able to figure out why this is happening. Can you help with this?
Thanks !
Gaurav
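One likely cause, hedged: with MPI, each rank (and each pass of gene_epsi_3D over the refined grids, which is also why geomcomplex_cyl is called multiple times) sees only a local sub-block of the domain, so epsi cannot safely be indexed with global coordinates read from a file. A sketch of a guard, assuming the loop above runs inside geomcomplex_cyl with the local bounds nxi:nxf, nyi:nyf, nzi:nzf that the subroutine receives:

```fortran
! Only set points that fall inside this rank's local slice of epsi;
! ex, ey, ez are global indices read from the file (hypothetical layout)
do i = 1, len
   read(2,*) ex, ey, ez
   if (ex >= nxi .and. ex <= nxf .and. &
       ey >= nyi .and. ey <= nyf .and. &
       ez >= nzi .and. ez <= nzf) then
      epsi(ex, ey, ez) = remp
   end if
end do
```

Note this still assumes the file's indices match the grid being processed; the refined (nraf) passes use a finer grid, so an index file built for the base mesh would also need rescaling there.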
Describe the bug
TGV case does not run in DEBUG mode
To Reproduce
Steps to reproduce the behavior:
***** Using the generic FFT engine *****
mpirun noticed that process rank 0 with PID 0 on node turbulence exited on signal 11 (Segmentation fault).
The file xcompact3d.f90 contains case-specific open / close statements. These should be moved inside the corresponding init and finalize subroutines in BC-xxx.f90.
Incompact3d/src/xcompact3d.f90
Line 264 in ca5be84
Incompact3d/src/xcompact3d.f90
Line 292 in ca5be84
However, the case/init subroutine is called only when irestart==0. It might be relevant to adapt the initial stage of xcompact3d. For instance: the case-specific open / close statements could then be moved inside case/boot and case/finalize.
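A sketch of the proposed flow. The subroutine names boot and finalize come from the suggestion above; the call sites and (empty) argument lists are illustrative only.

```fortran
! In init_xcompact3d: boot runs regardless of irestart and owns the
! case-specific open statements
call boot()
if (irestart == 0) call init()   ! initial condition only on a fresh start

! In finalise_xcompact3d: finalize owns the matching close statements
call finalize()
```

This keeps the open/close pairing inside the case module, so a restart run still opens (and later closes) the case-specific files.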
I wish to know whether the forces.f90 module works for both moving and stationary objects, or only for the latter. Could someone please comment on it?
When I tried to calculate the forces acting on a transversely oscillating cylinder (forced oscillation), I observed spurious oscillations in the coefficients of lift and drag, as shown below.
The paper titled "A simple and scalable immersed boundary method for high-fidelity simulations of fixed and moving objects on a Cartesian mesh" mentions the use of periodic boundary condition in y-direction along with stretching.
Extract from the paper,
A resolution of nx × ny = 385 × 384 is used, with a stretched mesh in the vertical direction towards the centre of the domain, with the smallest mesh spacing being ymin = 0.04. Initial conditions are the same as in the previous section except for the boundary conditions. A uniform velocity u∞ = 1.0 is imposed at the inlet while a 1D convection equation is imposed at the outlet. Periodic boundary conditions are imposed in the vertical direction.
But when setting the case using the examples input.i3d with the following values of the parameters,
nx=301 ! X-direction nodes
ny=301 ! Y-direction nodes
nz=8 ! Z-direction nodes
istret = 1 ! y mesh refinement (0:no, 1:center, 2:both sides, 3:bottom)
beta = 0.259065151 ! Refinement parameter (beta)
! Domain
xlx = 16. ! Lx (Size of the box in x-direction)
yly = 16. ! Ly (Size of the box in y-direction)
zlz = 2. ! Lz (Size of the box in z-direction)
! Boundary conditions
nclx1 = 2
nclxn = 2
ncly1 = 0
nclyn = 0
nclz1 = 0
nclzn = 0
iibm = 3
Is this a bug, or does the periodic boundary condition with stretching need to be implemented manually?
Is your feature request related to a problem? Please describe.
Currently, the CI is only performing tests with the Makefile.
Describe the solution you'd like
The CI could also try to compile the code with CMake.
Describe alternatives you've considered
N/A.
Additional context
Issue coming from #222
I have met some problems that I cannot figure out. I want to use the Incompact3d code to simulate the flow around a 3D square cylinder. I use the "cylinder" module, and my domain is xlx=24, yly=8, zlz=3. The length and width of the square cylinder are 2 and 1, respectively. The cex and cey of the square cylinder are 8 and 1, respectively. The X left and right limits of the control volume are 7.0 and 9.0, and the Y bottom and top limits are 0 and 2.5, respectively. But something goes wrong: after the calculation, the outcome is the following picture. The vorticity away from the square cylinder does not change; it is always close to zero. The inflow and outflow speeds are 0.05, and the Re is 300.
The changes in the code have been put in the attached file as screenshots.
新建 Microsoft Word 文档 (3).docx
Following several requests, it would be great to add a new example with the 3D periodic flow over a periodic 2D hill.
Describe the bug
When attempting to simulate immersed objects with z normals, the calculated z boundaries of the solid are off by a factor of the parameter nraf - for example when I attempt to simulate a box with boundaries [2.5,3.5],[2.5,3.5],[2.5,3.5], the values stored in xi(),xf() etc are [2.5,3.5],[2.5,3.5],[0.25,0.35] if nraf = 10, or [2.5,3.5],[2.5,3.5],[1.25,1.75] if nraf = 2. (I printed these out when they are called in cubsplx, cubsply, cubsplz - in ibm.f90) I only caught this because these incorrect values lead to index bounds errors later in the code. Of course, if nraf=1 then the problem does not manifest, but I have a very coarse boundary which I would like to be able to improve. I have confirmed that the calculation of the epsi field is correct by visualising it directly - it behaves as it should.
It seems clear that there is a missing multiple of nraf somewhere in the z calculation, but I have been unable to find any inconsistencies.
To Reproduce
This can be replicated just by replacing the setup in BC-Cylinder.f90 with a different shape function and then running the stock input file with iibm=3. I have attached both files that will replicate the error; it reproduces on the most current version of Incompact3d.
Expected behavior
The program should run without issue (in fact, if you run with all optimisations, the bounds error sometimes does not seem to cause any major problems and the solutions look ok).
I've attached a screenshot of the readouts that confirmed what I have so far.
BC-Cylinder.f90.txt
input.i3d.txt
Thank you very much for your time.
I am implementing a temporally developing turbulent boundary layer following the method of Kozul et al. (2016). This approach uses a fixed no-slip wall at the top of the computational domain, a uniformly moving wall at the bottom, and periodic BCs in the other directions. The Poisson equation uses Neumann BCs in the wall-normal direction (the same as doubly periodic channel flow). This method works fine with istret=2 or istret=0, but fails after fewer than 10 time steps with istret=3. I have reproduced this on the normal channel flow case with modified versions of the channel flow example on the latest commit. I am unsure if this is a limitation of istret=3, an issue with my case setup, or a bug. The branch for the temporal boundary layer is found here. It is also noticeable that the divergence of velocity is substantially degraded when using istret=3: for example, in turbulent boundary layer cases the max and mean of div U are ~10^{-7}, whereas with istret=0 or istret=2 they are ~10^{-14}.
git clone -b master https://github.com/xcompact3d/Incompact3d.git .
sed -i '30 i \\t\t BC-Cavity.f90' src/CMakeLists.txt
(BC-Cavity.f90 is missing from src/CMakeLists.txt)
mkdir build && cd build
module load cmake
module load PrgEnv-gnu
cmake -S .. -B .
make
Let me know if you need more information.
Describe the bug
A possible bug in reading statistics on restart: the mean flow statistics files are overwritten before the previous mean flow field is read.
The program calls xcompact3d -> postprocessing -> overall_statistic.
At the beginning of subroutine overall_statistic, it reads:
if (itime.lt.initstat) then
   return
elseif (itime.eq.initstat) then
   call init_statistic()
elseif (itime.eq.ifirst) then
   call restart_statistic()
endif
For a restart case, itime > initstat and itime = ifirst - 1, so the statistics are computed and the files overwritten before the previous field is read.
As a result, the statistics are not calculated correctly.
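One possible reordering, as a sketch only with hypothetical Python stand-ins for the Fortran calls, is to test the restart condition before the accumulation branches, so restart_statistic() runs before anything can overwrite the files:

```python
def statistic_action(itime, initstat, ifirst, restarting):
    """Decide which statistics routine to call this time step.
    Hypothetical sketch of a branch order for overall_statistic that
    reads previous statistics before any accumulation can overwrite them."""
    if restarting and itime == ifirst:
        return "restart_statistic"   # read previous mean fields first
    if itime < initstat:
        return None                  # statistics window not started yet
    if itime == initstat:
        return "init_statistic"      # zero the accumulators
    return "accumulate"              # normal accumulation / file update

# On a restart with itime = ifirst > initstat, the previous fields are
# read before any accumulation step can overwrite them:
print(statistic_action(itime=500, initstat=100, ifirst=500, restarting=True))
```

The `restarting` flag and the function name are illustrative only; the real fix must be expressed in terms of the variables available in overall_statistic.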
It appears to be out of date - e.g. it refers to flags that no longer exist.
Hi,
Deallocate statements are missing in the 3D FFT subroutines, at the following locations in decomp2d/fft_common_3d.inc (commit 687b5c1): lines 37, 79, 190 and 217.
The gravitational term is not computed if ilmn is .false. (transeq.f90):
if (ilmn) then
   !! Gravity
   if ((Fr**2).gt.zero) then
      call momentum_gravity(dux1, duy1, duz1, rho1(:,:,:,1) - one, one / Fr**2)
   endif
   do is = 1, numscalar
      call momentum_gravity(dux1, duy1, duz1, phi1(:,:,:,is), ri(is))
   enddo
endif
This is a bug for the proper simulation of gravity currents.
The correction was proposed in fschuch/Xcompact3d#6; I will now include it in the main branch.
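A sketch of the intended behaviour (a hypothetical Python stand-in, not the actual transeq.f90 fix): only the density-based buoyancy term stays behind the ilmn flag, while the scalar-driven gravity terms are applied unconditionally.

```python
def gravity_calls(ilmn, Fr, ri):
    """Return the momentum_gravity contributions that would be applied.
    Hypothetical sketch: the scalar terms are no longer gated by ilmn."""
    calls = []
    if ilmn and Fr**2 > 0.0:
        calls.append(("rho - 1", 1.0 / Fr**2))  # low-Mach density term
    for is_, ri_s in enumerate(ri, start=1):
        calls.append((f"phi{is_}", ri_s))       # scalar buoyancy terms
    return calls

# With ilmn = False the scalar gravity terms are still applied,
# which is what a gravity-current simulation needs:
print(gravity_calls(False, 1.0, [0.5]))
```

With the original code, passing ilmn = False would return an empty list, i.e. no gravity forcing at all, which is the bug reported above.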
Hello,
I think there is a typo in the init_xcompact3d subroutine in xcompact3d.f90: both branches test for iibm == 3, so genepsi3d and body are apparently both meant to be called in that case, but because the second test is an else if, only genepsi3d ever runs when iibm == 3. It doesn't matter much in practice, since genepsi3d will always be called.
if ((iibm.eq.2).or.(iibm.eq.3)) then
   call genepsi3d(ep1)
else if ((iibm.eq.1).or.(iibm.eq.3)) then
   call body(ux1,uy1,uz1,ep1)
endif
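For comparison, two independent conditionals would trigger both calls for iibm == 3. The sketch below uses hypothetical Python stand-ins for the Fortran calls:

```python
def ibm_setup(iibm):
    """Which IBM setup routines run for a given iibm value, if the
    else-if in init_xcompact3d were split into two independent ifs."""
    calls = []
    if iibm in (2, 3):
        calls.append("genepsi3d")
    if iibm in (1, 3):
        calls.append("body")
    return calls

print(ibm_setup(3))  # both routines: ['genepsi3d', 'body']
print(ibm_setup(2))  # only ['genepsi3d']
```

With the current else-if structure, ibm_setup(3) would return only ['genepsi3d'], because the first branch already matches.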
How about we move the declaration

real(mytype),dimension(xsize(1),xsize(2),xsize(3),numscalar) :: T ! FIXME This can be huge

to the ABL module and allocate it at the beginning of the simulation? Or we could make it allocatable and allocate it only in the ABL case.
I don't really understand why it doesn't just use the normal scalar variables for this, with something like a scalar_index_T to identify which scalar to use. However, I think that should be separate work, as getting rid of the memory usage in the general case is immediately useful.
Originally posted by @pbartholomew08 in #176 (comment)
After changing examples/adios2_config.xml and running make install, the adios2_config.xml in the installed examples is updated, but not the one under the Tests directory - causing tests to fail.
The current workaround is to rerun cmake. Is this the intended behaviour? Could make install also update the Tests directory?
It appears that restarts do not work properly for large meshes when using the MPI-IO build on ARCHER2. I have been using a fork of Incompact3d developed to implement the recycling-rescaling method for spatially accelerating turbulent boundary layers; however, I have reproduced the issue with the already-implemented turbulent boundary layer case in Incompact3d. The issue occurs on large meshes (the one tested has around 600M elements) and only after a restart, where the data read from the checkpoint file appears to be incorrect, notably for the velocity field (the pressure seems fine). I reproduced the problem using a version of the master branch modified to output the data immediately after the call to restart. This was tested on ARCHER2 because of the large mesh size, so it could be a problem with my environment, but I have tried different compilers with both O2 and O3 optimisation flags. Instructions to reproduce the problem are below; the issue should be apparent at least on commit 723961c (02/08/2022), with the maximum velocities appearing very different from the previously run simulation.
A modified version of commit 723961c has been tested such that the restarted fields are written out immediately after the call to restart, and the program then exits. The modified code is on the master_testing branch of the Incompact3d fork; this means the results show exactly what has been restarted from file.
I have attached the output of the git diff command to show how this differs from master, which is identical to commit 723961c (git_diff.log).
mkdir Incompact3D && cd Incompact3D
git init
git remote add origin https://github.com/mattfalcone1997/Incompact3d.git
git pull origin master_testing
mkdir build && cd build
cmake -S .. -B .
(or, for the ADIOS2 build: cmake -S .. -B . -DUSE_ADIOS2=ON -DADIOS2_ROOT="adios2_build_path")
make
Output files are attached. Clear differences were observed in the u, v, w maximum values after the restart, and the output data seems to be wrong (images are attached).
I expect the simulation to restart and continue as it was before the restart.
I have also tried different compilers and reduced the optimisation level to O2, to no effect. I have not exhaustively tested the issue, and the problem may well be related to my setup at compile or run time.
I have attempted to give as much information as I could, but let me know if you have any questions.
Thanks in advance,
Matthew Falcone
I suggest adding the topics navier-stokes, computational-fluid-dynamics, cfd, and large-eddy-simulation in the About section.