xcompact3d / incompact3d

Current CPU version of our solver for the Navier-Stokes equations

Home Page: https://xcompact3d.readthedocs.io/en/latest/

License: BSD 3-Clause "New" or "Revised" License

Fortran 98.73% Emacs Lisp 0.02% Shell 0.04% Python 0.07% CMake 1.14%
navier-stokes computational-fluid-dynamics cfd direct-numerical-simulation large-eddy-simulation

incompact3d's People

Contributors

admole, airwarriorg91, arahamz, cfd-xing, cjaneippel, etwll, fangjian19, fschuch, gdeskos, leormonteiro, mathrack, nasos94, nbeb, pbartholomew08, rfj82982, ricardofrantz, rvicentecruz, shykafer, slaizet, tlestang, vcz18385


incompact3d's Issues

Lock-exchange is not working when ilmn is False

The lock-exchange flow configuration is not working when ilmn is False.

We get this error:

 Xcompact3d is run with the default file -->input.i3d                                                                       
 ===========================================================
 ======================Xcompact3D===========================
 ===Copyright (c) 2018 Eric Lamballais and Sylvain Laizet===
 ===Modified by Felipe Schuch and Ricardo Frantz============
 ===Modified by Paul Bartholomew, Georgios Deskos and=======
 ===Sylvain Laizet -- 2018- ================================
 ===========================================================
 Git version        : v3.0-397-gff531df
 ===========================================================
 Simulating lock-exchange
 ===========================================================
 Reynolds number Re     :          2236.000
 xnu                    :        0.00044723
 ===========================================================
 p_row, p_col           :         0       0
 ===========================================================
 Time step dt           :        0.00480000
 Temporal scheme        :    Adams-bashforth 2
 ===========================================================
 ifirst                 :                 1
 ilast                  :            100000
 ===========================================================
 Lx                     :       18.00000000
 Ly                     :        2.00000000
 Lz                     :        2.00000000
 nx                     :               181
 ny                     :                29
 nz                     :                27
 ===========================================================
 istret                 :                 0
 beta                   :        0.25906515
 ===========================================================
 nu0nu                  :        4.00000000
 cnu                    :        0.44000000
 ===========================================================
 Scalar                 :               off
 numscalar              :                 0
 ===========================================================
 spinup_time            :                 0
 wrotation              :        0.00000000
 ===========================================================
 Immersed boundary      :               off
 ===========================================================
 Boundary condition velocity field: 
 nclx1, nclxn           :               1,1
 ncly1, nclyn           :               2,1
 nclz1, nclzn           :               1,1
 ===========================================================
 Numerical precision: Double
 ===========================================================
 High and low speed : u1=  2.00 and u2=  1.00
 Gravity vector     : (gx, gy, gz)=(     0.00000000,    -1.00000000,     0.00000000)
  
 Initial front location:    1.0000000000000000     
 ===========================================================
 In auto-tuning mode......
 factors:            1
 processor grid           1  by            1  time=   3.2067000000068901E-003
 the best processor grid is probably            1  by            1
 Initializing variables...
 Using the hyperviscous operator with (nu_0/nu,c_nu) = (   4.0000000000000000      ,  0.44000000000000000      )
 Using the hyperviscous operator with (nu_0/nu,c_nu) = (   4.0000000000000000      ,  0.44000000000000000      )
 Using the hyperviscous operator with (nu_0/nu,c_nu) = (   4.0000000000000000      ,  0.44000000000000000      )
 ===========================================================
 Visu module requires   0.267573029     GB
 ===========================================================
 ===========================================================
 Diffusion number
 cfl_diff_x             :           0.00021467
 cfl_diff_y             :           0.00042075
 cfl_diff_z             :           0.00036279
 cfl_diff_sum           :           0.00099821
 ===========================================================

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0  0x7f07f6c802ed in ???
#1  0x7f07f6c7f503 in ???
#2  0x7f07f62fd03f in ???
#3  0x7f07f644cbe2 in ???
#4  0x55a8f26b4b5c in __lockexch_MOD_set_fluid_properties_lockexch
        at src/BC-Lock-exchange.f90:754
#5  0x55a8f26b4b5c in __lockexch_MOD_init_lockexch
        at src/BC-Lock-exchange.f90:175
#6  0x55a8f277a554 in __case_MOD_init
        at src/case.f90:93
#7  0x55a8f27e81fe in init_xcompact3d_
        at src/xcompact3d.f90:237
#8  0x55a8f2402918 in xcompact3d
        at src/xcompact3d.f90:50
#9  0x55a8f2402918 in main
        at src/xcompact3d.f90:35
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node DESKTOP-RD7V3T4 exited on signal 11 (Segmentation fault).

Reproduce the error

Take the Lock-exchange/input.i3d and change:

- ilmn = .TRUE.         ! Enable low Mach number
+ ilmn = .FALSE.         ! Enable low Mach number

Then run the simulation.

Proposed solution

I see two options:

  1. Allocate mu1 anyway (see the sketch after this list). Right now the code at src/variables.f90 is:

    if (ilmn) then
        call alloc_x(mu1)
        mu1(:,:,:) = one
    endif
  2. Or protect references to mu1 at src/BC-Lock-exchange.f90 with if (ilmn) then.
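
For illustration, option 1 would amount to dropping the guard so that mu1 is always available; a minimal sketch, assuming alloc_x and one are in scope as in src/variables.f90:

    ! Allocate mu1 unconditionally so that BC-Lock-exchange.f90 can
    ! reference it even when ilmn = .FALSE.
    call alloc_x(mu1)
    mu1(:,:,:) = one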

Request for proper documentation of the software

I personally feel there is a great need for proper documentation of the software for beginners. The current documentation could be expanded to cover all the new aspects and FAQs. More information about the statistics and visu output is needed. This is just a suggestion; if required, I can volunteer for the task.

Thanks,
Gaurav

Fortran format not wide enough to display output

This is an output from the example wind turbine ADM simulation:

 Time step =  39975/ 400000, Time unit =9993.7500
......
===========================================================
 Time step =  40000/ 400000, Time unit =*********

As seen, the Time unit value overflows the F9.4 field and is printed as asterisks.

This is the corresponding source code:

tools.f90:          write(*,"(' Time step =',i7,'/',i7,', Time unit =',F9.4)") itime,ilast,t
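
A minimal sketch of one possible fix (widening the field is a suggestion, not a committed change):

    ! F12.4 accommodates time units up to 9999999.9999 before overflowing
    write(*,"(' Time step =',i7,'/',i7,', Time unit =',F12.4)") itime,ilast,t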

Implementation pipe flow

Hello,

As discussed during the last showcase event (27-28 April 2023, Imperial College London), it would be interesting to implement the pipe flow in the main branch of the code, given its academic nature and importance among the canonical wall-bounded flows. During my thesis, I worked on the full implementation of this type of flow in Xcompact3d, including heat transfer, with the pipe geometry represented by the Lagrange-polynomial immersed boundary method (iibm=2).

I started the implementation of the pipe flow in 2019, starting from the version of the code available in the main branch at that time. As the ultimate goal of my thesis was the introduction of an IB-based numerical strategy for Conjugate Heat Transfer simulations, many developments were made throughout these 4 years, including viscous filtering and fluid-solid thermal coupling for complex geometries. This version of the code is fully operational, and I believe it can serve as a basis to guide the implementation in the main branch.

As we discussed during the showcase event, perhaps we could start with the basic implementation/validation of the velocity solution alone as a first step (geometry creation in the frame of IBM, laminar/turbulent initial conditions, ...) before moving on to the heat transfer implementation. Naturally, I'm willing to restart from the basics and advance step-by-step, according to the guidelines you provide to implement this flow configuration.

Kind regards,
Rodrigo Vicente Cruz

Compiling in single precision failing

Hello! Trying to compile in single precision with either Intel or GCC fails.

mpiifort -fpp -O3 -ipo -fp-model fast=2 -mcmodel=large -safe-cray-ptr -I/lib -qmkl -I./src -I./decomp2d -fpp -O3 -ipo -fp-model fast=2 -mcmodel=large -safe-cray-ptr -I/lib -qmkl -DVERSION=\"\"  -I/opt/intel/oneapi/mkl/2021.4.0/include -c src/ibm.f90
src/ibm.f90(533): error #6633: The type of the actual argument differs from the type of the dummy argument.   [XA]
		            call cubic_spline(xa,ya,na,xpol,ypol)
----------------------------------------------^
src/ibm.f90(533): error #6633: The type of the actual argument differs from the type of the dummy argument.   [YA]
		            call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------^
src/ibm.f90(533): error #6633: The type of the actual argument differs from the type of the dummy argument.   [XPOL]
		            call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------------^
src/ibm.f90(533): error #6633: The type of the actual argument differs from the type of the dummy argument.   [YPOL]
		            call cubic_spline(xa,ya,na,xpol,ypol)
------------------------------------------------------------^
src/ibm.f90(697): error #6633: The type of the actual argument differs from the type of the dummy argument.   [XA]
		            call cubic_spline(xa,ya,na,xpol,ypol)
----------------------------------------------^
src/ibm.f90(697): error #6633: The type of the actual argument differs from the type of the dummy argument.   [YA]
		            call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------^
src/ibm.f90(697): error #6633: The type of the actual argument differs from the type of the dummy argument.   [XPOL]
		            call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------------^
src/ibm.f90(697): error #6633: The type of the actual argument differs from the type of the dummy argument.   [YPOL]
		            call cubic_spline(xa,ya,na,xpol,ypol)
------------------------------------------------------------^
src/ibm.f90(850): error #6633: The type of the actual argument differs from the type of the dummy argument.   [XA]
	                          call cubic_spline(xa,ya,na,xpol,ypol)
----------------------------------------------------^
src/ibm.f90(850): error #6633: The type of the actual argument differs from the type of the dummy argument.   [YA]
	                          call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------------^
src/ibm.f90(850): error #6633: The type of the actual argument differs from the type of the dummy argument.   [XPOL]
	                          call cubic_spline(xa,ya,na,xpol,ypol)
-------------------------------------------------------------^
src/ibm.f90(850): error #6633: The type of the actual argument differs from the type of the dummy argument.   [YPOL]
	                          call cubic_spline(xa,ya,na,xpol,ypol)
------------------------------------------------------------------^
....
src/acl_utils.f90(94): error #8209: If type specification is omitted, each element in an array-constructor must have the same type and kind type parameters.   [VOX]
   p=reshape([0.0d0,vOx,vOy,vOz],[4,1])
--------------------^
src/acl_utils.f90(94): error #8209: If type specification is omitted, each element in an array-constructor must have the same type and kind type parameters.   [VOY]
   p=reshape([0.0d0,vOx,vOy,vOz],[4,1])
------------------------^
src/acl_utils.f90(94): error #8209: If type specification is omitted, each element in an array-constructor must have the same type and kind type parameters.   [VOZ]
   p=reshape([0.0d0,vOx,vOy,vOz],[4,1])
----------------------------^
...
src/acl_elem.f90(620): error #6633: The type of the actual argument differs from the type of the dummy argument.   [0.0D0]
        Call QuatRot(txtmp,tytmp,tztmp,theta,nrx,nry,nrz,0.0d0,0.0d0,0.0d0,vrx,vry,vrz)
---------------------------------------------------------^
src/acl_elem.f90(620): error #6633: The type of the actual argument differs from the type of the dummy argument.   [0.0D0]
        Call QuatRot(txtmp,tytmp,tztmp,theta,nrx,nry,nrz,0.0d0,0.0d0,0.0d0,vrx,vry,vrz)
---------------------------------------------------------------^
src/acl_elem.f90(620): error #6633: The type of the actual argument differs from the type of the dummy argument.   [0.0D0]
        Call QuatRot(txtmp,tytmp,tztmp,theta,nrx,nry,nrz,0.0d0,0.0d0,0.0d0,vrx,vry,vrz)
---------------------------------------------------------------------^
...

The same errors appear in several places inside the acl_* files.

and with GCC also:

mpif90 -cpp -O3 -funroll-loops -floop-optimize -g -Warray-bounds -fcray-pointer -fbacktrace -ffree-line-length-none -fallow-argument-mismatch -I./src -I./decomp2d -cpp -O3 -funroll-loops -floop-optimize -g -Warray-bounds -fcray-pointer -fbacktrace -ffree-line-length-none -fallow-argument-mismatch -DVERSION=\"\"   -c src/dynstall_legacy.f90
src/dynstall_legacy.f90:365:36:

  365 |     dCDF=KD*(CLstat-CLF)*sign(1.0d0,CLstat)
      |                                    1
Error: ‘b’ argument of ‘sign’ intrinsic at (1) must be the same type and kind as ‘a’
src/dynstall_legacy.f90:378:19:

  378 |     if (sign(1.0d0,lb%dcv*lb%CLRefLE)<0 .OR. abs(alphaL-AOA0)>acut .OR. lb%CLRateFlag<0) then
      |                   1
Error: ‘b’ argument of ‘sign’ intrinsic at (1) must be the same type and kind as ‘a’
make: *** [Makefile:107: src/dynstall_legacy.o] Error 1
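
My reading of these errors (an assumption, not a confirmed diagnosis): the hard-coded d0 literals are always double precision, so they no longer match real(mytype) dummy arguments once mytype is single precision. A sketch of kind-safe replacements:

    ! Kind-parameterised constants match mytype in both precisions
    real(mytype), parameter :: zero = 0.0_mytype, one = 1.0_mytype
    ! acl_utils.f90: the array constructor now has a single kind throughout
    p = reshape([zero, vOx, vOy, vOz], [4,1])
    ! dynstall_legacy.f90: both arguments of sign() now share a kind
    dCDF = KD*(CLstat-CLF)*sign(one, CLstat)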


Volume average in avg3d

Hello,

In the subroutine avg3d, when estimating the average of a 3D field, the cell at i=nx is skipped when nclx1=1:

if (nclx1==1.and.xend(1)==nx) then

It seems odd that skipping the cell at i=nx depends on the boundary condition at x=0 (nclx1) rather than on the boundary condition at x=Lx (nclxn).
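
For illustration, the condition I would have expected instead (an assumption on my part, to be confirmed by the developers):

    ! Skip the duplicated cell based on the condition at x = Lx
    if (nclxn==1.and.xend(1)==nx) then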

Verification of restart can crash the simulation

I am running a cylinder case simulation on our HPC with 200 cores. The first time, I ran the simulation with a 750x750x16 mesh on 100x20 cores; it terminated after running 40000 time steps out of 120000. Going through issue #141, I modified the mesh to 1041x1040x20 with 20x10 cores based on the suggestions. This time the simulation terminated at 70000 steps out of 600000. I believe there is some bug causing the simulation to crash at restart points.
The error I got is shown in the attached screenshot.

I am using openmpi 4.0.2 and gfortran 10.2.0 to compile and run the simulation. I have attached the input.i3d file.
inputi3d.txt

Error while compiling Xcompact3D in HPC

I am trying to run a simulation on our HPC, but I am not able to compile the code. I get the error function 'is_contiguous' at (1) has no implicit type in the decomp2d/io.f90 file. The function is_contiguous is not defined anywhere in the file (it is a Fortran 2008 intrinsic, so an older compiler may simply not recognise it). Interestingly, I didn't face this error when I compiled the code on my laptop (Ubuntu).

Steps to reproduce the behavior:

  1. Git cloned the latest version (commit #4949f03).
  2. Ran module load compiler/openmpi/3.1.0/gnu, then make clean and make to compile.

As I mentioned earlier, I didn't face this error while compiling the code on my laptop, but I do face it while compiling on the HPC.

(screenshot attached: Screenshot 2023-06-13 083627)

Query about the output pressure

Hi,

I have a question relating to the pressure output in the visualisation subroutines. In particular, this relates to the pressure that is output compared with the physical pressure. This may be a misunderstanding on my part, but Kim and Moin (1985) state that $p = \phi + (\Delta t/(2Re))\nabla^2 \phi$, and I believe the last term comes from the use of the Crank-Nicolson scheme in their paper. As a result, I was wondering whether Xcompact3D takes this term into account (or whether it needs to) when outputting the pressure when implicit time schemes are used for the $y$ viscous terms.

Thanks in advance,
Matthew

runtime error with Intel ifx compiler

Describe the bug

This is from a debug build with the Intel OneAPI ifx compiler. While running the standard channel flow case, I pick up the following error at the beginning of the code:

forrtl: severe (189): LHS and RHS of an assignment statement have incompatible types
Image              PC                Routine            Line        Source
xcompact3d         000000000278A0AF  Unknown               Unknown  Unknown
xcompact3d         000000000043084B  alloc_x_real               32  alloc.inc
xcompact3d         00000000004595A9  init_variables            141  variables.f90
xcompact3d         000000000079E3B9  init_xcompact3d           211  xcompact3d.f90
xcompact3d         000000000079DBB9  init_xcompact3d_.           0  xcompact3d.f90
xcompact3d         000000000079D228  xcompact3d                 51  xcompact3d.f90
xcompact3d         000000000040F74D  Unknown               Unknown  Unknown
libc-2.28.so       00001462526A5D85  __libc_start_main     Unknown  Unknown
xcompact3d         000000000040F66E  Unknown               Unknown  Unknown

This is the v4.0 code.

TODO-List before merging hackathon2021 into master

List of updates / fix needed

  • Shall we read statistics when doing a restart ?
  • Avoid storing too many restart / statistics file
  • DONE. Check that the new visualization files (vort/critq) are written in /data and with the same format, for consistency
  • TBL broken with scalars

BC-Pipe-flow.f90 not included in src/CMakeLists.txt

Describe the bug
Compilation of the master branch (commit 34d69d2) results in a compilation error (screenshot attached).

This error is caused by the fact that BC-Pipe-flow.f90 was not included in the src/CMakeLists.txt file (probably just forgotten in the last commit). Adding BC-Pipe-flow.f90 to the CMakeLists.txt file fixes the problem.


Issues with dynamic Smagorinsky model

I want to report two issues regarding the dynamic Smagorinsky model for you to consider.

  1. By default, the max dynamic Smagorinsky constant (maxdsmagcst) is set to zero. If it is not specified in the input file, the calculated turbulent viscosity will be zero throughout the computational domain, invalidating the LES model. I suggest using a relatively large number as the default value for maxdsmagcst, say 0.14 (see the sketch after this list).
  2. The variables sxx1, syy1, szz1, sxy1, syz1 and sxz1 are not calculated before their use on line 615 of the file les_models.f90. Though their values are updated later, on line 866 of the same subroutine by "call smag(nut1,ux1,uy1,uz1)", the updates are lagged. A specific problem is that with the dynamic Smagorinsky model, at the first step of the simulation, sxx1, syy1, szz1, sxy1, syz1 and sxz1 will always be zero. I suggest moving line 866 before line 615, which should solve the problem.
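
For illustration, a minimal sketch of suggestion 1 (where exactly the default is set is an assumption on my part; the parameter name follows the issue text):

    ! Use a non-zero default so the dynamic model is not silently disabled
    ! when maxdsmagcst is absent from the input file
    maxdsmagcst = 0.14_mytype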

compile error with ADIOS2 support on

Describe the bug

There is a compiling error when ADIOS2 support is switched on. This applies to both v4.0 and the latest master branch.

mpif90 -cpp -O3 -funroll-loops -floop-optimize -g -Warray-bounds -fcray-pointer -fbacktrace -ffree-line-length-none -fallow-argument-mismatch -DADIOS2 -I./src -I./decomp2d -cpp -O3 -funroll-loops -floop-optimize -g -Warray-bounds -fcray-pointer -fbacktrace -ffree-line-length-none -fallow-argument-mismatch -DDOUBLE_PREC -DVERSION=\"v4.0-249-g8d7b078\"   -DADIOS2_USE_MPI -I/path/to/adios2/include/adios2/fortran   -c decomp2d/io.f90
decomp2d/io.f90:139:87:

  139 |   call adios2_init(adios, trim(config_file), MPI_COMM_WORLD, adios2_debug_mode, ierror)
      |                                                                                       1
Error: There is no specific subroutine for the generic 'adios2_init' at (1)
make: *** [Makefile:110: decomp2d/io.o] Error 1

To Reproduce

make IO=adios2 ADIOS2DIR=/path/to/adios2 #with MPI wrapper in $PATH
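
For context, my own diagnosis (not confirmed by the maintainers): newer ADIOS2 releases removed the adios2_debug_mode argument from the Fortran API, so the call would need to match the newer interface:

    ! Sketch for ADIOS2 >= 2.8, where the debug-mode argument was removed
    call adios2_init(adios, trim(config_file), MPI_COMM_WORLD, ierror)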

Turbulence statistics are not written out properly

Describe the bug
Running an example with turbulence modelling enabled (e.g. Smagorinsky) generates a file turb-data; however, it should generate turb-data/nut_smag-X.bin for output step X.

To Reproduce
Steps to reproduce the behavior:

  1. Git version 1ecd773
  2. Compilation options N/A
  3. Set-up Enable turbulence modelling (e.g. Smagorinsky)
  4. See error described above

Expected behavior
Described above.

Review nxraf, nyraf and nzraf

nxraf=(nxm)*nraf+1;nyraf=(nym)*nraf+1;nzraf=(nzm)*nraf+1

The +1 is not recommended when the domain is periodic in a given direction, since it produces inconsistencies with dxraf, dyraf and dzraf in the subroutine gene_epsi_3D.

I have the solution at my hackathon branch:

!complex_geometry
- nxraf=(nxm)*nraf+1;nyraf=(nym)*nraf+1;nzraf=(nzm)*nraf+1 
+ nxraf=(nxm)*nraf
+ if (.not.nclx) nxraf=nxraf+1
+ nyraf=(nym)*nraf
+ if (.not.ncly) nyraf=nyraf+1
+ nzraf=(nzm)*nraf
+ if (.not.nclz) nzraf=nzraf+1

I will do a pull request with it soon.

The simulation terminates automatically without any divergence.

I am testing a cylinder-case simulation with 2 cores on my laptop, checking for divergence issues before running it on the HPC with 200 cores (100x2). But the code terminates on its own, and I get the following error.


Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted.


mpirun noticed that process rank 0 with PID 0 on node gauravVostro exited on signal 9 (Killed).

I used the command mpirun -np 2 xcompact3d. I have tried mesh sizes of 800x800x16 and 900x900x16; both produce the same error. I have also tried lowering the time step, with the same result. I have attached the input.i3d file. Please help me with this error.

inputi3d.txt

Memory usage at gene_epsi_3D

The subroutine gene_epsi_3D was integrated into the main code as part of release 2.0. It came with two benefits: no more need for pre-processing, besides the performance and scalability with 2decomp. However, it demands a large amount of memory, mainly for the arrays xepsi, yepsi and zepsi.

In order to improve memory usage, I bring a few suggestions:

  1. Allocate them only if iibm .eq. 2 and, preferably, do not keep more than one allocated at the same time;
  2. Since they are expected to hold just zeros or ones, they could be changed to a Boolean type (a huge reduction compared to double- or single-precision floating point); see the sketch after this list.
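
A minimal sketch of suggestion 2, with array and size names assumed from the issue text:

    ! Store the refined IBM mask as logicals (typically 1 byte each)
    ! instead of real(mytype), and allocate only when needed
    logical, allocatable, dimension(:,:,:) :: xepsi
    if (iibm == 2) then
       allocate(xepsi(nxraf, xsize(2), xsize(3)))
       xepsi(:,:,:) = .false.
    end if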

Let me know what you think about it; I could work on it.

It is also possible to produce the geometry externally with xcompact3d-toolbox.genepsi, but some bugs have to be solved first (#3).

question

Hi All
Can you elaborate on how to visualize the output (field data)? I see a data folder with velocity components and vorticity files such as vort-309.bin. Do we have to convert them to a format recognizable by ParaView?
Thanks

Error in writing a backup file

Describe the bug
I am running the Xcompact3d code on a cluster using 96 processors. When the code starts to write a backup file, it finishes with an error in the "rename" subroutine.

Here is the message at the end of the simulation:

! xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Writing restart point restart0400000

Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.


mpiexec detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

Process name: [[46279,1],0]
Exit code: 2
! xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

And this is the error:

! xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
/var/spool/slurmd/job51286/slurm_script: line 0: unalias: cc: not found
Fortran runtime error: EXECUTE_COMMAND_LINE: Invalid command line

Error termination. Backtrace:
#0 0x7f77d67988d9 in set_cmdstat
at ../.././libgfortran/intrinsics/execute_command_line.c:58
#1 0x7f77d6798ab6 in set_cmdstat
at ../.././libgfortran/intrinsics/execute_command_line.c:89
#2 0x7f77d6798ab6 in execute_command_line
at ../.././libgfortran/intrinsics/execute_command_line.c:112
#3 0x86a792 in rename
at src/tools.f90:1027
#4 0x86a792 in __tools_MOD_restart
at src/tools.f90:254
#5 0x403017 in xcompact3d
at src/xcompact3d.f90:83
#6 0x403017 in main
at src/xcompact3d.f90:7
! xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

This error is strange to me because it only happens sometimes. For instance, if I set the frequency for writing a backup file to 200000, the xcompact3d code writes the backup file at 200000 time steps, but it doesn't at 400000 time steps.

To Reproduce
Steps to reproduce the behavior:

  1. Git version 1.7.1
  2. I am using the attached "Makefile" file
  3. I am using the attached "input.i3d" file for a discretization of 640x237x160 in x, y and z directions, respectively.

Expected behavior
I expect a backup file at each "icheckpoint", but the code sometimes does not write it.

InputMakefile.odt

Thank you so much for your help.

William

A small typo in the code

The typo doesn't affect the running of the code at all, but as I have found it, I would like to report it.
On line 410 of src/parameters.f90, the code reads

if (ilesmod.ne.0) then
   write(*,*) ' :DNS'

I think that when ilesmod equals 0, the simulation is a DNS, so the if statement should be:
if (ilesmod.eq.0) then
   write(*,*) ' :DNS'

Please check.

Unable to read epsilon function from a dat.file

Hello, I am trying to import a custom geometry into the cylinder case by reading, from a dat file, the coordinates of the domain points that lie inside the geometry and setting them equal to remp manually. When I compile and run the code using openmpi -np 2 xcompact3d, it starts the geometry-generation procedure but crashes with the error below.

The modification I made is in BC-Cylinder.f90: I commented out the do loops that generated the cylinder and added the following code. I have attached the epsilondat.txt file for reference.

    open(2, file = 'src/epsilon.dat', status = 'unknown')
    read(2,*) len                ! number of solid points listed in the file
    do i=1,len
       read(2,*) ex,ey,ez        ! grid indices of a solid point
       epsi(ex,ey,ez)=remp       ! mark the point as solid
       ! note: epsi is likely pencil-decomposed, so global (ex,ey,ez)
       ! indices can fall outside the local bounds on a given MPI rank
    enddo
    close(2)

Error

[login07:194055:0:194055] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x7fc61379ec98)
==== backtrace (tid: 194055) ====
 0 0x00000000007152dc __cyl_MOD_geomcomplex_cyl()  /scratch/19ae30032/flat_plate_low/src/BC-Cylinder.f90:85
 1 0x00000000007bb155 __genepsi_MOD_gene_epsi_3d()  /scratch/19ae30032/flat_plate_low/src/genepsi3d.f90:226
 2 0x00000000007c0a9d __genepsi_MOD_genepsi3d()  /scratch/19ae30032/flat_plate_low/src/genepsi3d.f90:134
 3 0x00000000007c173c init_xcompact3d_()  /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:195
 4 0x0000000000402b8f xcompact3d()  /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:23
 5 0x0000000000402b8f main()  /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:7
 6 0x0000000000022555 __libc_start_main()  ???:0
 7 0x0000000000403119 _start()  ???:0
=================================

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0  0x7fc6868cd62f in ???
#1  0x7152dc in __cyl_MOD_geomcomplex_cyl
        at src/BC-Cylinder.f90:85
#2  0x7bb154 in __genepsi_MOD_gene_epsi_3d
        at src/genepsi3d.f90:226
#3  0x7c0a9c in __genepsi_MOD_genepsi3d
        at src/genepsi3d.f90:134
#4  0x7c173b in init_xcompact3d_
        at src/xcompact3d.f90:195
#5  0x402b8e in xcompact3d
        at src/xcompact3d.f90:23
#6  0x402b8e in main
        at src/xcompact3d.f90:7
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node login07 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
[login07:194046] 1 more process has sent help message help-mpi-btl-openib.txt / error in device init
[login07:194046] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

While debugging, I noticed that the subroutine geomcomplex_cyl is being called multiple times. Is this normal? I am not able to figure out why this is happening. Can you help with this?

Thanks !
Gaurav

TGV case does not run in DEBG mode

Describe the bug
TGV case does not run in DEBUG mode

To Reproduce
Steps to reproduce the behavior:

  1. Git version: current master branch v4.0-114-ge56485b
  2. Compilation: options -DDEBG with gfortran compiler 11.2
  3. Set-up: TGV
  4. See error
    DIV U* max mean= 5.3237008523459193E-003 1.0803248978164840E-003

Solve Poisson before1 pp3 -1.6591250762536540E-008

Poisson11X Start rw2 -1.3312986680372684E-007

Poisson11X Start rw1 -6.2943711029358916E-006

***** Using the generic FFT engine *****


Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.


mpirun noticed that process rank 0 with PID 0 on node turbulence exited on signal 11 (Segmentation fault).

Move case-specific initialization / finalization to dedicated case subroutines

The file xcompact3d.f90 contains case-specific open / close statements. This should be moved inside the corresponding init and finalize subroutines in BC-xxx.f90.

if (itype==2) then

if (itype==2) then

However, the case/init subroutine is called only when irestart==0. It might be relevant to adapt the initial stage of xcompact3d. For instance:

  • always call a subroutine case/boot
  • then call the subroutine case/init when irestart==0

The case-specific open / close statements could then be moved inside case/boot and case/finalize

Force Calculation on Moving objects

I wish to know whether the forces.f90 module works for both moving and stationary objects, or only for the latter. Could someone please comment on it?
When I tried to calculate the forces acting on a transversely oscillating cylinder (forced oscillation), I observed spurious oscillations in the coefficient of lift and coefficient of drag, as shown in the attached C_l plot.

Using Periodic B.C. with Alternating Forcing Direction IBM gives error

The paper titled "A simple and scalable immersed boundary method for high-fidelity simulations of fixed and moving objects on a Cartesian mesh" mentions the use of periodic boundary conditions in the y-direction along with stretching.

Extract from the paper,
A resolution of nx × ny = 385 × 384 is used, with a stretched mesh in the vertical direction towards the centre of the domain with the smallest mesh spacing being ymin = 0.04. Initial conditions are the same as in the previous section except for the boundary conditions. A uniform velocity u∞ = 1.0 is imposed at the inlet while a 1D-convection equation is imposed at the outlet. Periodic boundary conditions are imposed in the vertical direction.

But when setting up the case using the example input.i3d with the following parameter values,

nx=301               ! X-direction nodes
ny=301                 ! Y-direction nodes
nz=8                 ! Z-direction nodes
istret = 1            ! y mesh refinement (0:no, 1:center, 2:both sides, 3:bottom)
beta = 0.259065151    ! Refinement parameter (beta)

! Domain
xlx = 16.      ! Lx (Size of the box in x-direction)
yly = 16.            ! Ly (Size of the box in y-direction)
zlz = 2.            ! Lz (Size of the box in z-direction)

! Boundary conditions
nclx1 = 2
nclxn = 2
ncly1 = 0
nclyn = 0
nclz1 = 0
nclzn = 0

iibm = 3.

I get the error shown in the attached screenshot.

Is this a bug, or does this need to be implemented manually?

Error while creating custom geometry in BC-Cylinder.f90

Hello, I am trying to import a custom geometry into the cylinder case by reading, from a dat file, the coordinates of the domain points that lie inside the geometry and setting them equal to remp manually. When I compile and run the code using openmpi -np 2 xcompact3d, it starts the geometry-generation procedure but crashes with the error below.

The modification I made is in BC-Cylinder.f90: I commented out the do loops that generated the cylinder and added the following code. I have attached the epsilondat.txt file for reference.

    open(2, file = 'src/epsilon.dat', status = 'unknown')
    read(2,*) len                ! number of solid points listed in the file
    do i=1,len
       read(2,*) ex,ey,ez        ! grid indices of a solid point
       epsi(ex,ey,ez)=remp       ! mark the point as solid
       ! note: epsi is likely pencil-decomposed, so global (ex,ey,ez)
       ! indices can fall outside the local bounds on a given MPI rank
    enddo
    close(2)

Error

[login07:194055:0:194055] Caught signal 11 (Segmentation fault: address not mapped to object at address 0x7fc61379ec98)
==== backtrace (tid: 194055) ====
 0 0x00000000007152dc __cyl_MOD_geomcomplex_cyl()  /scratch/19ae30032/flat_plate_low/src/BC-Cylinder.f90:85
 1 0x00000000007bb155 __genepsi_MOD_gene_epsi_3d()  /scratch/19ae30032/flat_plate_low/src/genepsi3d.f90:226
 2 0x00000000007c0a9d __genepsi_MOD_genepsi3d()  /scratch/19ae30032/flat_plate_low/src/genepsi3d.f90:134
 3 0x00000000007c173c init_xcompact3d_()  /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:195
 4 0x0000000000402b8f xcompact3d()  /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:23
 5 0x0000000000402b8f main()  /scratch/19ae30032/flat_plate_low/src/xcompact3d.f90:7
 6 0x0000000000022555 __libc_start_main()  ???:0
 7 0x0000000000403119 _start()  ???:0
=================================

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0  0x7fc6868cd62f in ???
#1  0x7152dc in __cyl_MOD_geomcomplex_cyl
        at src/BC-Cylinder.f90:85
#2  0x7bb154 in __genepsi_MOD_gene_epsi_3d
        at src/genepsi3d.f90:226
#3  0x7c0a9c in __genepsi_MOD_genepsi3d
        at src/genepsi3d.f90:134
#4  0x7c173b in init_xcompact3d_
        at src/xcompact3d.f90:195
#5  0x402b8e in xcompact3d
        at src/xcompact3d.f90:23
#6  0x402b8e in main
        at src/xcompact3d.f90:7
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 0 on node login07 exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
[login07:194046] 1 more process has sent help message help-mpi-btl-openib.txt / error in device init
[login07:194046] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

While debugging, I noticed that the subroutine geomcomplex_cyl is being called multiple times. Is this normal? I also printed the values of nxi, nxf, nyi, nyf, nzi and nzf; they keep changing on their own (see the attached screenshot).
I am not able to figure out why this is happening. Can you help with this?

Thanks !
Gaurav

Include the CMake build in the CI

Is your feature request related to a problem? Please describe.
Currently, the CI is only performing tests with the Makefile.

Describe the solution you'd like
The CI could also try to compile the code with CMake

Describe alternatives you've considered
N/A.

Additional context
Issue coming from #222

I want to simulate flow around a 3D square cylinder, but I am making some mistakes

I have met some problems that I cannot figure out. I want to use the incompact3d code to simulate flow around a 3D square cylinder. I use the "cylinder" module, and my domain is xlx=24, yly=8, zlz=3. The length and width of the square cylinder are 2 and 1, respectively. The cex and cey of the square cylinder are 8 and 1, respectively. The X left and right bounds for the control volume are 7.0 and 9.0, respectively; the Y bottom and top bounds are 0 and 2.5, respectively. But there are some mistakes. After the calculation, the outcome is shown in the attached picture: the vorticity everywhere except at the square cylinder does not change; it is always close to zero. The inflow and outflow speeds are 0.05, and the Re is 300.
The changes to the code are shown as screenshots in the attached file 新建 Microsoft Word 文档 (3).docx ("New Microsoft Word Document (3).docx").

Incorrect boundary conditions for immersed solids in z direction

Describe the bug
When attempting to simulate immersed objects with z normals, the calculated z boundaries of the solid are off by a factor of the parameter nraf. For example, when I attempt to simulate a box with boundaries [2.5,3.5],[2.5,3.5],[2.5,3.5], the values stored in xi(), xf() etc. are [2.5,3.5],[2.5,3.5],[0.25,0.35] if nraf = 10, or [2.5,3.5],[2.5,3.5],[1.25,1.75] if nraf = 2 (I printed these out where they are used in cubsplx, cubsply and cubsplz, in ibm.f90). I only caught this because the incorrect values lead to index-bounds errors later in the code. Of course, if nraf=1 the problem does not manifest, but I have a very coarse boundary which I would like to be able to improve. I have confirmed that the calculation of the epsi field is correct by visualising it directly; it behaves as it should.
It seems clear that there is a missing multiple of nraf somewhere in the z calculation, but I have been unable to find any inconsistencies.

To Reproduce

This can be replicated just by replacing the setup in BC-Cylinder.f90 with a different shape function and then running the stock input file with iibm=3. I have attached both files needed to replicate the error. It occurs on the most current version of Incompact3d.

Steps to reproduce the behavior:

  1. Git version - Current.
  2. Just using the default "make BUILD=debug" on my ubuntu machine.
  3. I've attached the only changes I've made, as well as the input file.

Expected behavior
The program should run without issue (in fact, if you run with all optimisations, the bounds error sometimes doesn't seem to cause any major issues, and the solutions look ok).

I've attached a screenshot of the readouts that confirmed what I have so far.
Screenshot 2023-11-03 124005
BC-Cylinder.f90.txt
input.i3d.txt

Thank you very much for your time.

Issues with istret=3

I am implementing a temporally developing turbulent boundary layer following the method of Kozul et al. (2016). The basis of this approach is a fixed wall at the top of the computational domain modelled as a no-slip wall, a uniformly moving wall at the bottom, and periodic BCs in the other directions. The Poisson equation uses Neumann BCs in the wall-normal direction (the same as the doubly periodic channel flow). This method works fine if istret=2 or istret=0 is used but fails after fewer than 10 timesteps if istret=3 is used. I have reproduced this on the normal channel flow case with modified versions of the channel flow example on the latest commit. I am unsure if this is a limitation of istret=3, an issue with my case setup, or a bug. The branch for the temporal boundary layer is found here. It is also noticeable that the divergence of velocity is substantially degraded when using istret=3: for example, in turbulent boundary layer cases div U max and mean are ~10^{-7}, whereas with istret=0 or istret=2 they are ~10^{-14}.

Steps to reproduce on commit d0c7397 on archer2

Build steps

  1. git clone -b master https://github.com/xcompact3d/Incompact3d.git .
  2. sed -i '30 i \\t\t BC-Cavity.f90' src/CMakeLists.txt (This file is missing from the CMakeLists.txt file)
  3. mkdir build && cd build
  4. module load cmake
  5. module load PrgEnv-gnu
  6. cmake -S .. -B .
  7. make

Test cases based on Channel example

  1. Coarse mesh with istret=0 (checks it is the coarse mesh at the boundary) - works
  2. Fine mesh with istret=2 (checks stretching to both sides) - works
  3. Fine mesh with istret=3 - fails

Let me know if you need more information.

statistics read problem at restart

Describe the bug

There is a possible bug in reading statistics on restart: the mean-flow statistics files are overwritten before the previous mean flow field is read.

the program calls xcompact3d -> postprocessing -> overall_statistic

At the beginning of subroutine overall_statistic, it reads:

if (itime.lt.initstat) then
   return
elseif (itime.eq.initstat) then
   call init_statistic()
elseif (itime.eq.ifirst) then
   call restart_statistic()
endif

For a restart case, itime>initstat and itime=ifirst-1, which leads to the statistics being calculated and overwritten before the previous field is read.

This leads to the statistics not being calculated correctly.


Potential memory leak in FFT subroutines

Hi,

Deallocate statements are missing in the 3D FFT subroutines.

allocate (wk1(ph%xsz(1),ph%xsz(2),ph%xsz(3)))

allocate (wk1(ph%zsz(1),ph%zsz(2),ph%zsz(3)))

allocate(wk1(sp%zsz(1),sp%zsz(2),sp%zsz(3)))

allocate(wk1(sp%xsz(1),sp%xsz(2),sp%xsz(3)))
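
For illustration, each of these would need a matching release before the subroutine returns; a minimal sketch:

    ! Free the FFT work array once the transform is complete
    deallocate (wk1)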

Not computing gravitational term for density currents

The gravitational term is not computed if ilmn is .FALSE. (transeq.f90):

    if (ilmn) then
      !! Gravity
      if ((Fr**2).gt.zero) then
        call momentum_gravity(dux1, duy1, duz1, rho1(:,:,:,1) - one, one / Fr**2)
      endif
      do is = 1, numscalar
        call momentum_gravity(dux1, duy1, duz1, phi1(:,:,:,is), ri(is))
      enddo
    endif

That is a bug for the proper simulation of gravity currents.

The correction was proposed in fschuch/Xcompact3d#6. I will now include it in the main branch.
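
For illustration, such a correction could move the scalar loop outside the ilmn guard; this is a sketch only, as the actual patch in fschuch/Xcompact3d#6 may differ:

    ! Keep the low-Mach-number buoyancy term behind the ilmn guard...
    if (ilmn) then
       if ((Fr**2).gt.zero) then
          call momentum_gravity(dux1, duy1, duz1, rho1(:,:,:,1) - one, one / Fr**2)
       endif
    endif
    ! ...but apply the scalar gravity terms unconditionally, so density
    ! currents driven by scalars (ilmn = .FALSE.) still feel gravity
    do is = 1, numscalar
       call momentum_gravity(dux1, duy1, duz1, phi1(:,:,:,is), ri(is))
    enddo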

Typo in init_xcompact3d subroutine

Hello,
I think there is a typo in the init_xcompact3d subroutine in xcompact3d.f90: both the genepsi3d and body branches are guarded for iibm==3, but because of the else-if construct only genepsi3d is actually called when iibm==3. It doesn't matter much in practice, since genepsi3d will always be called.

  if ((iibm.eq.2).or.(iibm.eq.3)) then
     call genepsi3d(ep1)
  else if ((iibm.eq.1).or.(iibm.eq.3)) then
     call body(ux1,uy1,uz1,ep1)
  endif

https://github.com/xcompact3d/Incompact3d/blob/a0bec26fdb5a002cfe351e3a398028aa0c93a35f/src/xcompact3d.f90#L235C2-L239

Avoid a large stack array used only in the ABL case

real(mytype),dimension(xsize(1),xsize(2),xsize(3),numscalar) :: T ! FIXME This can be huge

Should we move it to the ABL module and allocate it at the beginning of the simulation?

How about we make it allocatable and allocate it only in the ABL case?

I don't really understand why it doesn't just use the normal scalar variables for this, with something like scalar_index_T to identify which scalar to use. However, I think that should be separate work, as getting rid of the memory usage in the general case is immediately useful.

Originally posted by @pbartholomew08 in #176 (comment)
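
A minimal sketch of the allocatable approach (using itype_abl as the case identifier is an assumption):

    ! Declare T allocatable instead of as an automatic (stack) array...
    real(mytype), allocatable, dimension(:,:,:,:) :: T
    ! ...and allocate it once, only when running the ABL case
    if (itype == itype_abl) then
       allocate(T(xsize(1), xsize(2), xsize(3), numscalar))
    end if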

CMake Test directory doesn't appear to be regenerated

After changing examples/adios2_config.xml and running make install, the adios2_config.xml in the installed examples is updated, but not the one under the Tests directory, causing tests to fail.

The current resolution is to rerun cmake. Is this the correct behaviour? Could make install also update the Tests directory?

Unusual issues restarting cases with large meshes on Archer2

It appears that the restart does not work properly for large meshes using the MPI-IO build on ARCHER2. I have been using a fork of Incompact3D developed to implement the recycling-rescaling method for spatially accelerating turbulent boundary layers. However, I have reproduced the problem using the already-implemented turbulent boundary layer case in Incompact3D. The issue occurs on large meshes (the one tested has around 600M elements) and only after a restart, where it appears that the data read from the checkpoint file is incorrect, notably for the velocity field, although the pressure seems fine. I have reproduced the problem using a version of the master branch modified to output the data immediately after the call to restart. This was tested on ARCHER2 due to the large size of the mesh, so this could be a problem with my environment, but I have tested different compilers with both O2 and O3 optimisation flags. The instructions to reproduce the problem are below; the issue should be apparent at least on commit SHA 723961c (02/08/2022), with the maximum velocities appearing very different from the previously run simulation.

Tests

A modified version of commit 723961c has been tested such that restarted simulations output their fields immediately after the call restart command; the program is then exited. The modified code can be found on the master_testing branch of the Incompact3D fork. This means the results show exactly what has been restarted from file.

Summary of what has been tested

  1. A smaller mesh which runs correctly. Files attached below:
  2. A larger mesh which doesn't run correctly
  3. Case 2 using ADIOS2 which runs fine
    • uses same input files as case 2
    • adios used for the output rather than MPI-IO directly
    • adios2_config.xml same as channel flow example

To reproduce the error on archer2.

Getting the testing code.

I have attached the output of the git diff command to indicate how this has been changed from master which is the same as commit SHA 723961c (git_diff.log)

  1. mkdir Incompact3D && cd Incompact3D
  2. git init
  3. git remote add origin https://github.com/mattfalcone1997/Incompact3d.git
  4. git pull origin master_testing

Configuration and compilation options

  1. mkdir build && cd build
  2. Without ADIOS2:
    cmake -S .. -B .
    With ADIOS2:
    cmake -S .. -B . -DUSE_ADIOS2=ON -DADIOS2_ROOT="adios2_build_path"
  3. make

Errors

Output files are attached. Clear differences were observed in the u, v, w maximum values after the restart, and the output data seems to be wrong (images are attached).

Expected behaviour

I expect the simulation to restart and continue as it was before the restart

Notes

I have also tried different compilers and reduced the optimisation level to O2, to no effect. I have not exhaustively tested the issue, and the problem may well be related to my setup at compile or run time.

I have attempted to give as much information as I could but let me know if you have any questions
Thanks in advance,
Matthew Falcone

Large mesh

  • Before restart

    • velocity u
    • velocity v
    • pressure

  • After restart

    • velocity u
    • velocity v
    • pressure

Small mesh

  • Before restart

  • After restart

Large mesh with Adios2

  • Before restart

  • After restart

add topics

I suggest adding the topics navier-stokes, computational-fluid-dynamics, cfd, large-eddy-simulation in the About section.
