
Redesign the UMAT in Python


Why are we doing this?

How UMAT works

What happens in UMAT is simple: at its heart, the return-mapping algorithm solves a nonlinear system using the Newton-Raphson (NR) procedure. There are 48 unknowns/equations in this system: 24 are the amounts of (plastic) slip and 24 are the plastic slip resistances. Because of the nonlinear nature of this system of equations, we solve it by perturbing one unknown at a time to form a Jacobian, which we then use to update the values of the unknowns while iterating.
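A minimal sketch of that procedure, with a `residual` callable standing in for the 48 return-mapping equations (all names here are illustrative, not taken from the Fortran code):

```python
import numpy as np

def newton_raphson(residual, x0, tol=1e-10, max_iter=50, eps=1e-8):
    # Solve residual(x) = 0 with NR, building the Jacobian by
    # perturbing one unknown at a time (48 columns for 48 unknowns).
    x = x0.copy()
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):        # one perturbed unknown per column
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        x = x - np.linalg.solve(J, r)  # NR update: x <- x - J^{-1} r
    return x
```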

What we hope to achieve

  • If we can identify the forward pass in the UMAT (the one we use as a starting point for the perturbation), then we can implement it in PyTorch. We can then investigate whether this forward pass gives us a gradient. If it does, we multiply this gradient with the residual vector.

Can PyTorch compute the inverse of a Jacobian?

If the forward pass is successful, we get the Jacobian; but to complete the NR procedure, we need to multiply the inverse of this Jacobian with the residual vector. Can we do this inversion?
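PyTorch can do both steps: `torch.autograd.functional.jacobian` builds the Jacobian of a differentiable forward pass, and `torch.linalg.solve` applies its inverse to a vector without forming the inverse explicitly. A sketch with a toy residual standing in for the 48 return-mapping equations:

```python
import torch

def residual(x):
    # Toy stand-in for the 48-equation return-mapping residual.
    return x ** 3 - 1.0

x = torch.full((48,), 0.5)
J = torch.autograd.functional.jacobian(residual, x)  # shape (48, 48)
# Solving J dx = r is cheaper and more stable than computing J^{-1}.
dx = torch.linalg.solve(J, residual(x))
x = x - dx  # one NR update
```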

Code coverage

```
coverage run -m unittest discover
coverage report -m
```

List of code modifications

Implement the inverse gamma relation, as mentioned in Denny’s thesis.

Implemented. Currently it does not yield good results: it can easily produce NaN values, so it is not used at the moment. For now we rely on penalties and the data loss to guide the search.

Correct the get_r_II function. In the case g_i < s_i, we should have Δγ_i and not zero.

This is already taken care of with the inclusion of the inverse-gamma relation.

Implement barrier penalty formulation.
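One common formulation is a logarithmic barrier, which diverges as the iterate approaches the constraint boundary. A minimal sketch, assuming the constraint x > 0 and a barrier weight t (both are assumptions, not the project's actual formulation):

```python
import torch

def log_barrier(x, t=10.0):
    # Penalizes x approaching 0 from above; assumes x is strictly positive.
    return -(1.0 / t) * torch.log(x).sum()
```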

Factor drive.py into smaller routines.

Run UMAT-Fortran with the same orientation as the one we use in python-umat and collect statistics:

Compare stress prior to yield from both methods

Implement stress calculation (Cauchy) in python-umat.

How many iterations are done in RETURNMAPPINGBCC

Residual at the end of the return mapping

How many times we call the slip-system-check routine

Time it takes for a single run of the return-mapping algorithm

Make the code flexible enough that it runs both with optimizers from the optim module and with the ad-hoc optimizer, where we zero out parameters that violate constraints after each iteration.

This is no longer considered since we have shifted to the PINNs framework.

Find small numerical “constrained optimization problems” to test various optimization algorithms.

This is no longer considered since we have shifted to the PINNs framework.

Write more tests

All the UMAT routines are tested and verified (constants, construction, and batch computation). Additional tests are written to ensure that the Fortran and PyTorch computations produce the same values.

Read about L1-regularization (could it lead to a sparse parameter set?)

A very wishy-washy explanation, based on what I took from Goodfellow’s book: the main effect of L1-regularization is that the gradient does not scale with the value of the parameters, as is the case with L2-regularization. Instead, we get unit gradients for each parameter of the optimization. This, combined with a large penalty coefficient, pushes the parameters to the boundaries (or beyond, in which case they are projected back to the boundary) of the subspace that defines the permissible space for optimization. In short, there is a higher chance, although not guaranteed, that L1-regularization will lead to a sparse parameter set than L2-regularization.
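The contrast between the two penalty gradients is easy to check numerically:

```python
import torch

w = torch.tensor([0.01, -2.0, 3.0], requires_grad=True)

l1 = w.abs().sum()
l1.backward()
print(w.grad)   # tensor([ 1., -1.,  1.]) -- unit magnitude regardless of |w|

w.grad = None
l2 = 0.5 * (w ** 2).sum()
l2.backward()
print(w.grad)   # tensor([ 0.0100, -2.0000,  3.0000]) -- scales with w
```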

Try LBFGS

  • How to include constraints in the formulation of LBFGS? This won’t be needed at the moment: since our focus has shifted to computing the loss using the PINNs approach, solely optimizing the physical loss is no longer considered.

Add a frequency for logging the required data to TensorBoard. Make sure a frequency is provided for each logged item.

Try the gamma_dot equation in both its original form and its inverse form. Add this to the config_file as a switch.

Replay buffer code.

The autoregression code should be able to start from any point (parameter alpha, which is between 0 and 1).

ACTION: Parameter alpha is now an argument to the inference function. Tested and works fine.

Autoregression computes both stress and gamma/slip-resistance values. Find proper error indicators for gamma.

Make sure that the computation of stresses from gamma/slip values is consistent and error-free.

Add the UMAT/Fortran repo as a submodule to use it for inference.

At inference time, we need to clip the gamma and slip resistance values that we get from the model (see the sketch after this list):

  • delta gamma should be strictly non-negative
  • slip resistance should be clipped between the minimum and maximum slip resistance

ACTION: Both of these are implemented in both the train and inference code.
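A minimal sketch of that clipping, assuming tensor outputs `delta_gamma` and `slip_res` and scalar bounds `s_min` and `s_max` (all names illustrative):

```python
import torch

def clip_outputs(delta_gamma, slip_res, s_min, s_max):
    # Delta gamma must be non-negative; slip resistance stays in [s_min, s_max].
    return (torch.clamp(delta_gamma, min=0.0),
            torch.clamp(slip_res, min=s_min, max=s_max))
```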

How to deal with multiple, competing losses.

Implement a warmup period during which the physics loss is not calculated. There are two approaches to this (sketched after the list):

  1. Try to see if a relative-error criterion makes sense. For example, as long as the physics loss is bigger than 10x the data loss, don’t compute the physics loss.
  2. Simply skip computing the physics loss for, e.g., the first 100 iterations.
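A sketch of both gates; the names, the warmup length, and the 10x factor mirror the examples above but are otherwise illustrative:

```python
import torch

def combined_loss(data_loss, physics_loss_fn, step, warmup_steps=100):
    # Approach 2: skip the physics term entirely during warmup.
    if step < warmup_steps:
        return data_loss
    # Approach 1: probe the physics loss without gradients and gate it
    # on its size relative to the data loss.
    with torch.no_grad():
        probe = physics_loss_fn()
    if probe > 10.0 * data_loss:
        return data_loss
    return data_loss + physics_loss_fn()
```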

Clipping the gradient. Clip the gradient of each loss separately. For this we need to accumulate the gradients from each loss individually, without help from the optimizer. See how this should be implemented.
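A sketch of the per-loss accumulation, using `torch.autograd.grad` to obtain each loss’s gradient without the optimizer’s help; the clip-by-global-norm scheme here is an assumption:

```python
import torch

def step_with_per_loss_clipping(params, optimizer, losses, max_norm=1.0):
    # Accumulate gradients loss by loss, rescaling each contribution
    # so its global L2 norm does not exceed max_norm.
    total = [torch.zeros_like(p) for p in params]
    for loss in losses:
        grads = torch.autograd.grad(loss, params, retain_graph=True)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, max_norm / (norm.item() + 1e-12))
        for acc, g in zip(total, grads):
            acc.add_(g, alpha=scale)
    optimizer.zero_grad()
    for p, g in zip(params, total):
        p.grad = g
    optimizer.step()
```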

L2 regularization of the network weights.
