
GridapTopOpt.jl's Introduction

GridapTopOpt


GridapTopOpt is a computational toolbox for level set-based topology optimisation, implemented in Julia using the Gridap package ecosystem. See the documentation and the following publication for further details:

Zachary J. Wegert, Jordi Manyer, Connor Mallon, Santiago Badia, and Vivien J. Challis (2024). "GridapTopOpt.jl: A scalable Julia toolbox for level set-based topology optimisation". arXiv:2405.10478.

Documentation

  • STABLE: Documentation for the most recently tagged version.
  • DEVELOP: Documentation for the most recent in-development version.

Citation

In order to give credit to the GridapTopOpt contributors, we ask that you please reference the above paper along with the required citations for Gridap.


GridapTopOpt.jl's Issues

Embedded + Autodiff

So here are some notes on the implementation of autodiff for embedded triangulations. A similar discussion can be found here.

Overview of the problem:

We distinguish two cases:

  1. When differentiating the boundary, the triangulations change, i.e., some interior cells become cut, some cut cells become interior, etc.
  2. The above does not happen, i.e., all interior, cut, and exterior cells remain the same. In this case, the domain is modified by moving the cuts around within their cut cells: the number and (reference) shape of the sub-cells stay the same, but their areas differ.

Case 1:

  • Since the number and type of interior/exterior/cut cells change, quadrature points are effectively created and/or destroyed.
  • Therefore, there is no one-to-one correspondence between quadrature points in the original and updated measures.
  • This makes it (I think) impossible to propagate dual numbers.

Case 2:

  • The quadratures in the sub-cell reference spaces are the same. Differentiability comes from the change in the geometric maps going from the sub-cell reference spaces to their parent cut cells.
  • Duality can be embedded within those geometric maps, which are produced from the values of our level set. See the sketch after this list.
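
As a minimal illustration of case 2 (a plain-Julia sketch using ForwardDiff, not GridapTopOpt API), consider a reference cell [0,1] cut at x⋆ = φ[1]/(φ[1]-φ[2]): the quadrature on the sub-cell is fixed, and dual numbers enter only through the geometric map ξ ↦ x⋆*ξ and its Jacobian x⋆.

using ForwardDiff

# Level-set values (φ[1],φ[2]) at the two nodes of the reference cell [0,1];
# the zero crossing sits at x⋆ = φ[1]/(φ[1]-φ[2]) when the nodal signs differ.
cut_point(φ) = φ[1]/(φ[1]-φ[2])

# Integrate f over the sub-cell [0,x⋆] using a fixed reference quadrature;
# only the geometric map ξ ↦ x⋆*ξ (with Jacobian x⋆) depends on φ.
function subcell_integral(f,φ)
  ξ = (0.5-0.5/sqrt(3), 0.5+0.5/sqrt(3)) # two-point Gauss rule on [0,1]
  w = (0.5, 0.5)
  x⋆ = cut_point(φ)
  sum(w[i]*x⋆*f(x⋆*ξ[i]) for i in 1:2)
end

f(x) = x^2
φ = [0.6,-0.4]                           # this cell is cut at x⋆ = 0.6
dI = ForwardDiff.gradient(φ -> subcell_integral(f,φ), φ) # sensitivities d(∫f)/dφ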

Open questions:

  • Is there a way to always ensure we fall into case 2? For example, by using AgFEM?
  • If not, is there a way to stabilise this to still obtain acceptable results?

Autodiff broken for RepeatingAffineFEStateMap

The function adjoint_solve!(φ_to_u::RepeatingAffineFEStateMap,du::AbstractVector) expects an array of arrays for du (or something similar for the map), but it is being passed a vector/PVector by j_pullback. This appears to happen because on line 178 of ChainRules.jl we call assemble_vector!(∂j∂u_vec,assem_U,∂j∂u_vecdata), which assembles over the multi-field space U but uses assem_U for U0 and V0.

One way to fix this is to create a new constructor for StateParamIntegrandWithMeasure:

function StateParamIntegrandWithMeasure(F::IntegrandWithMeasure,φ_to_u::RepeatingAffineFEStateMap)
  U,V,V_φ,U_reg = φ_to_u.spaces
  assem_deriv = get_deriv_assembler(φ_to_u)
  assem_U = SparseMatrixAssembler(U,V,...)
  StateParamIntegrandWithMeasure(F,U,V_φ,U_reg,assem_U,assem_deriv)
end

However, this would then hard-code the assembler assem_U, and we would also need to supply the parameters in place of ....

@JordiManyer I've created this to remind us that this is currently bugged.

Analytic gradient breaks in parallel for integrals of certain measures

When an optimisation function does not contain a measure over the whole computational domain but the analytic derivative does, the ghost DoFs will not be allocated. For example, for J(u,φ,dΩ,dΓ_N) = ∫(g*u)dΓ_N with analytic shape derivative dJ(q,u,φ,dΩ,dΓ_N) = ∫(κ*∇(u)⋅∇(u)*q*(DH ∘ φ)*(norm ∘ ∇(φ)))dΩ, an error will be thrown saying AssertionError: You are trying to set a value that is not stored in the local portion.

The current workaround for this problem is to modify J to be J(u,φ,dΩ,dΓ_N) = ∫(g*u)dΓ_N + ∫(0)dΩ.

The hope is that the move to PartitionedArrays 0.4 will help resolve this issue.

Test scripts available at scripts/_dev/bug_issue_46/...

Broken stuff

  • Reinitialisation for high-order distributed FESpaces is broken. MWE in parallel_reinit_order=2.jl
  • DoF permutation does not work for periodic models
  • Elasticity solver is broken for high-order periodic BCs
  • Elasticity solver is broken for multifield FESpaces. MWE in scripts/MPI/MPI_main_inverse_homenisation_AGM.jl

Functions of IntegrandWithMeasure

An issue for allowing something like the following:
F(u,φ,dΩ1,dΩ2,...) = ∫(f)dΩ1*∫(g)dΩ2
and
G(u,φ,dΩ1,dΩ2,...) = ∫(f)dΩ1/∫(g)dΩ2
with AD capability, as discussed on Slack with @JordiManyer. A sketch of the required chain rule is given below.
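
For reference, the AD side of this only requires the product/quotient rule applied to the scalar values and sensitivity vectors of the individual integrals. A minimal plain-Julia sketch (hypothetical names, not GridapTopOpt API):

# Given the integral values a = ∫(f)dΩ1, b = ∫(g)dΩ2 and their sensitivity
# vectors da, db with respect to φ, the combined sensitivities are:
d_product(a,da,b,db)  = b .* da .+ a .* db         # d(a*b) = b*da + a*db
d_quotient(a,da,b,db) = da ./ b .- (a/b^2) .* db   # d(a/b) = da/b - a*db/b²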

Enhancement for paper: Refactor src, Write docs, Refactor package IO

Rather minor, but I think we could improve the file structure in src.

src/
   Optimisers/...
   Utilities.jl
   Advection.jl
   VelocityExtension.jl
   Solvers.jl
   ChainRules.jl (*)
   GridapExtensions.jl (**)
   PackageName.jl (***)

The main change to the above is moving MaterialInterpolation.jl and Benchmarks.jl into Utilities.jl.

Note
*: I think we could rename this to be clearer to people who don't typically use AD. @JordiManyer?
**: To be moved at some point.
***: Need to decide on a name.

Hyperelasticity + PETSc + large deformations: `DIVERGED_INDEFINITE_PC`

The ElasticitySolver implemented for linear elasticity does not converge for the linear solves in hyperelastic problems (e.g., Saint Venant–Kirchhoff or neo-Hookean; the latter is the dev Gridap tutorial), even with relatively small strains.

using Gridap, Gridap.MultiField, GridapDistributed, GridapPETSc, GridapSolvers, 
  PartitionedArrays, LSTO_Distributed, SparseMatricesCSR
using GridapSolvers: NewtonSolver

function main(mesh_partition,distribute,el_size,δₓ)
  ranks = distribute(LinearIndices((prod(mesh_partition),)))

  ## Parameters
  order = 1
  xmax,ymax,zmax=(2.0,1.0,1.0)
  dom = (0,xmax,0,ymax,0,zmax)
  γ = 0.1
  γ_reinit = 0.5
  max_steps = floor(Int,minimum(el_size)/3)
  tol = 1/(2order^2)*prod(inv,minimum(el_size))
  η_coeff = 2
  path = dirname(dirname(@__DIR__))*"/results/testing_hyper_elast"
  i_am_main(ranks) && ~isdir(path) && mkdir(path)

  ## FE Setup
  model = CartesianDiscreteModel(ranks,mesh_partition,dom,el_size)
  Δ = get_Δ(model)
  f_Γ_D(x) = (x[1] ≈ 0.0)
  f_Γ_N(x) = (x[1] ≈ xmax)
  update_labels!(1,model,f_Γ_D,"Gamma_D")
  update_labels!(2,model,f_Γ_N,"Gamma_N")

  ## Triangulations and measures
  Ω = Triangulation(model)
  dΩ = Measure(Ω,2order)

  ## Spaces
  reffe = ReferenceFE(lagrangian,VectorValue{3,Float64},order)
  reffe_scalar = ReferenceFE(lagrangian,Float64,order)
  V = TestFESpace(model,reffe;dirichlet_tags=["Gamma_D","Gamma_N"])
  U = TrialFESpace(V,[VectorValue(0.0,0.0,0.0),VectorValue(δₓ,0.0,0.0)])
  V_φ = TestFESpace(model,reffe_scalar)
  V_reg = TestFESpace(model,reffe_scalar;dirichlet_tags=["Gamma_N"])
  U_reg = TrialFESpace(V_reg,0)

  ## Create FE functions
  # φh = interpolate(gen_lsf(4,0.2),V_φ)
  φh = interpolate(x->-1,V_φ)

  ## Interpolation and weak form
  interp = SmoothErsatzMaterialInterpolation(η = η_coeff*maximum(Δ))
  I,H,DH,ρ = interp.I,interp.H,interp.DH,interp.ρ

  ## Material properties and loading
  _E = 1000
  ν = 0.3
  μ, λ = _E/(2*(1 + ν)), _E*ν/((1 + ν)*(1 - 2*ν))

  ## Saint Venant–Kirchhoff law
  F(∇u) = one(∇u) + ∇u'                           # deformation gradient
  E(F) = 0.5*( F' ⋅ F - one(F) )                  # Green–Lagrange strain
  Σ(∇u) = λ*tr(E(F(∇u)))*one(∇u) + 2*μ*E(F(∇u))   # 2nd Piola–Kirchhoff stress
  T(∇u) = F(∇u) ⋅ Σ(∇u)                           # 1st Piola–Kirchhoff stress
  res(u,v,φ,dΩ) = ∫((I ∘ φ)*((T ∘ ∇(u)) ⊙ ∇(v)))dΩ

  ## Finite difference solver and level set function
  stencil = AdvectionStencil(FirstOrderStencil(3,Float64),model,V_φ,tol,max_steps)
  reinit!(stencil,φh,γ_reinit)

  ## Setup solver and FE operators
  Tm = SparseMatrixCSR{0,PetscScalar,PetscInt}
  Tv = Vector{PetscScalar}
  lin_solver = ElasticitySolver(V)
  nl_solver = NewtonSolver(lin_solver;maxiter=50,rtol=10^-8,verbose=i_am_main(ranks))

  state_map = NonlinearFEStateMap(
    res,U,V,V_φ,U_reg,φh,dΩ;
    assem_U = SparseMatrixAssembler(Tm,Tv,U,V),
    assem_adjoint = SparseMatrixAssembler(Tm,Tv,V,U),
    assem_deriv = SparseMatrixAssembler(Tm,Tv,U_reg,U_reg),
    nls = nl_solver, adjoint_ls = lin_solver
  )

  ## Optimiser
  u = LSTO_Distributed.forward_solve(state_map,φh)
  uh = FEFunction(U,u)
  write_vtk(Ω,path*"/struc_$δₓ",0,["phi"=>φh,"H(phi)"=>(H ∘ φh),"|nabla(phi)|"=>(norm ∘ ∇(φh)),"uh"=>uh];iter_mod=1)
end

with_mpi() do distribute
  mesh_partition = (3,2,2)
  el_size = (50,25,25)
  hilb_solver_options = "-pc_type gamg -ksp_type cg -ksp_error_if_not_converged true 
    -ksp_converged_reason -ksp_rtol 1.0e-12 -mat_block_size 3
    -mg_levels_ksp_type chebyshev -mg_levels_esteig_ksp_type cg -mg_coarse_sub_pc_type cholesky"
  
  GridapPETSc.with(args=split(hilb_solver_options)) do
    main(mesh_partition,distribute,el_size,0.02)
    main(mesh_partition,distribute,el_size,0.05)
    main(mesh_partition,distribute,el_size,0.1)
  end
end

For the final case, which corresponds to 5% strain, the output looks like:

KSP Object: 12 MPI processes
  type: cg
  maximum iterations=200, initial guess is zero
  tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
  left preconditioning
  using DEFAULT norm type for convergence test
PC Object: 12 MPI processes
  type: gamg
  PC has not been set up so information may be incomplete
    type is MULTIPLICATIVE, levels=0 cycles=unknown
      Cycles per PCApply=0
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =  
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels to square graph 0
          Number smoothing steps 1
        Complexity:    grid = 0.    operator = 0.
  linear system matrix = precond matrix:
  Mat Object: 12 MPI processes
    type: mpiaij
    rows=99372, cols=99372
    total: nonzeros=7537680, allocated nonzeros=7537680
    total number of mallocs used during MatSetValues calls=0
      has attached near null space
      using I-node (on process 0) routines: found 2160 nodes, limit used is 5
--------------- Starting Newton-Raphson solver --------
  > Iteration   0 - Residuals: 1.04e+03,   1.00e+00 
Linear solve did not converge due to DIVERGED_INDEFINITE_MAT iterations 3
  > Iteration   1 - Residuals: 3.28e+02,   3.15e-01 
Linear solve did not converge due to DIVERGED_INDEFINITE_MAT iterations 6
  > Iteration   2 - Residuals: 2.05e+05,   1.97e+02 
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 3
  > Iteration   3 - Residuals: 2.13e+05,   2.05e+02 
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 2
  > Iteration   4 - Residuals: 2.17e+05,   2.09e+02 
Linear solve did not converge due to DIVERGED_INDEFINITE_PC iterations 2
  > Iteration   5 - Residuals: 1.70e+06,   1.64e+03 

... (and so on until error)

This may be due to asymmetry or the extreme eigenvalues of the resulting Jacobian? I messed around with the PETSc settings etc. and wasn't able to improve the situation.
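
One untested thought along these lines (standard PETSc options, nothing package-specific): CG assumes a symmetric positive-definite operator, so once the tangent stiffness goes indefinite, a Krylov method that tolerates indefiniteness may at least avoid the DIVERGED_INDEFINITE_* failures, e.g.

# Hypothetical alternative to the CG-based options above; untested here.
solver_options = "-ksp_type gmres -pc_type gamg -ksp_rtol 1.0e-12 -ksp_converged_reason"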

@JordiManyer

TagBot trigger issue

This issue is used to trigger TagBot; feel free to unsubscribe.

If you haven't already, you should update your TagBot.yml to include issue comment triggers.
Please see this post on Discourse for instructions and more details.

If you'd like for me to do this for you, comment TagBot fix on this issue.
I'll open a PR within a few hours, please be patient!
