jdtoscano94 / learning-scientific_machine_learning_residual_based_attention_pinns_deeponets

Physics-Informed Machine Learning Tutorials (PyTorch and JAX)

Jupyter Notebook 100.00%
deep-learning machine-learning neural-network neural-networks pytorch tutorial physicsinformedneuralnetworks pinns deeponet inverse-problems

learning-scientific_machine_learning_residual_based_attention_pinns_deeponets's Introduction

Learning-PIML-in-Python-PINNs-DeepONets-RBA

Hi, I’m Juan Diego Toscano. Thanks for stopping by.

This repository will help you get started in the world of physics-informed machine learning. Inside the Tutorials folder, you will find several step-by-step guides on the basic concepts required to run and understand physics-informed machine learning models (from approximating functions, to solving and discovering ODEs/PDEs with PINNs, to solving parametric PDEs with DeepONets).

For advanced users, the repository also includes our latest research on PINNs, which achieves state-of-the-art performance using residual-based attention (RBA).

I reviewed some of these problems on my YouTube channel, so please watch them if you have time.

PINNs YouTube Tutorial: https://youtu.be/AXXnSzmpyoI

Inverse PINNs YouTube Tutorial: https://youtu.be/77jChHTcbv0

PI-DeepONets YouTube Tutorial: https://youtu.be/YpNYVD9B_Js

Also, if you are interested in PINNs and machine learning, please consider subscribing to the Crunch Group (Brown University) YouTube channel. They upload weekly seminars on scientific machine learning.

https://www.youtube.com/channel/UC2ZZB80udkRvWQ4N3a8DOKQ

Note: The tutorials in this repository were adapted from the following sources:

DeepXDE library: https://deepxde.readthedocs.io/en/latest/

PINNs Repository 1: https://github.com/omniscientoctopus/Physics-Informed-Neural-Networks/tree/main/PyTorch/Burgers'%20Equation

PINNs Repository 2: https://github.com/alexpapados/Physics-Informed-Deep-Learning-Solid-and-Fluid-Mechanics

DeepOnets Repository 1: https://github.com/PredictiveIntelligenceLab/Physics-informed-DeepONets

Also here is our official implementation of RBA weights in PyTorch:

RBA Repository: https://github.com/soanagno/rba-pinns

References

[1] Anagnostopoulos, S. J., Toscano, J. D., Stergiopulos, N., & Karniadakis, G. E. (2024). Residual-based attention in physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 421, 116805.

[2] Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2017). Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations. arXiv preprint arXiv:1711.10561. https://arxiv.org/abs/1711.10561

[3] Lu, L., Meng, X., Mao, Z., & Karniadakis, G. E. (2019). DeepXDE: A deep learning library for solving differential equations. arXiv preprint arXiv:1907.04502. https://arxiv.org/abs/1907.04502

[4] Rackauckas, C. Introduction to Scientific Machine Learning through Physics-Informed Neural Networks. https://book.sciml.ai/notes/03/

[5] Repository: Physics-Informed-Neural-Networks (PINNs). https://github.com/omniscientoctopus/Physics-Informed-Neural-Networks/tree/main/PyTorch/Burgers'%20Equation

[6] Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2017). Physics Informed Deep Learning (Part II): Data-driven Discovery of Nonlinear Partial Differential Equations. arXiv preprint arXiv:1711.10566. https://arxiv.org/abs/1711.10566

[7] Repository: Physics-Informed Deep Learning and its Application in Computational Solid and Fluid Mechanics. https://github.com/alexpapados/Physics-Informed-Deep-Learning-Solid-and-Fluid-Mechanics

[8] Lu, L., Jin, P., & Karniadakis, G. E. (2019). DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193. https://arxiv.org/abs/1910.03193

[9] Wang, S., Wang, H., & Perdikaris, P. (2021). Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Science Advances, 7(40), eabi8605.

Contributors: jdtoscano94

learning-scientific_machine_learning_residual_based_attention_pinns_deeponets's Issues

Issue with using your code in Colab because of JAX!

Hi Juan,
Thanks for your helpful video. I subscribed!
However, I have a problem: the line "from jax.experimental import optimizers" does not work for me; it fails with "cannot import name 'optimizers' from 'jax.experimental'". To make it work, I had to switch to CPU and install an older version with "!pip install jax[cpu]==0.2.27". This is confusing for me and, from what I found searching online, for other people as well.
Could you please let me know how you use the GPU in your video? Running on my CPU takes forever!

Thank you
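A note for anyone hitting the same error (a sketch, not from the repository): in newer JAX releases, the module jax.experimental.optimizers was moved to jax.example_libraries.optimizers. Rather than pinning an old JAX (which loses current GPU wheels), one option is a small import shim that resolves whichever location the installed JAX provides:

```python
import importlib

# jax.experimental.optimizers was moved to jax.example_libraries.optimizers
# in newer JAX releases; try the new path first, then fall back to the old one.
optimizers = None
for path in ("jax.example_libraries.optimizers", "jax.experimental.optimizers"):
    try:
        optimizers = importlib.import_module(path)
        break
    except ImportError:
        continue

print("resolved optimizers module:", optimizers)
```

With this shim in place, the notebooks' calls such as optimizers.adam(...) should work on both old and new JAX versions without downgrading to the CPU-only wheel.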

Wrong derivative on DeepONet example

On the DeepONet anti-derivative notebook, the last example shows how to approximate the anti-derivative of $u(x)=\cos(2\pi x), \forall x\in[0,1]$. However, the u_fn lambda is defined as $u(x)=-\cos(2\pi x), \forall x\in[0,1]$.

From what I could understand, $s(x)$ would equal $-\frac{1}{2\pi}\sin(2\pi x)$ in this case.

Actual text:


Let's obtain the anti-derivative of a trigonometric function. However, remember that this neural operator works for $x\in[0,1]$ when the anti-derivative's initial value is zero ($s(0)=0$). To fulfill that condition, we will use $u(x)=\cos(2\pi x), \forall x\in[0,1]$.

#u_fn = lambda x, t: np.interp(t, X.flatten(), gp_sample)
u_fn = lambda t, x: -np.cos(2*np.pi*x)  # Should be np.cos(2*np.pi*x)
# Input sensor locations and measurements
x = np.linspace(0, 1, m)
u = u_fn(None, x)
# Output sensor locations and measurements
y = random.uniform(key_train, (m,)).sort()
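A quick numerical sanity check of the sign fix (a sketch, independent of the notebook): with the corrected $u(x)=\cos(2\pi x)$, the anti-derivative satisfying $s(0)=0$ is $s(x)=\frac{1}{2\pi}\sin(2\pi x)$, and a finite-difference derivative of $s$ recovers $u$:

```python
import numpy as np

# Anti-derivative pair: if u(x) = cos(2*pi*x), then s(x) = sin(2*pi*x)/(2*pi)
# satisfies s(0) = 0 and ds/dx = u, matching the operator's assumptions.
x = np.linspace(0, 1, 1001)
u = np.cos(2 * np.pi * x)
s = np.sin(2 * np.pi * x) / (2 * np.pi)

# Finite-difference derivative of s should match u up to discretization error.
ds = np.gradient(s, x)
print(np.max(np.abs(ds - u)))  # small discretization error
```

With the sign left as in the notebook ($u(x)=-\cos(2\pi x)$), the same check yields $s(x)=-\frac{1}{2\pi}\sin(2\pi x)$ instead, which is the mismatch described above.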

PINN model doesn't perform well outside the training points

I ran your simple ODE solver using a PINN. The model performs well on the training interval:

[plot: prediction on the training interval]

But it fails to predict outside the training interval:

[plot: prediction outside the training interval]

I was surprised, as the paper is considered so good (from my theoretical perspective). Shouldn't it learn the periodicity of the solution? Have you tested it and gotten better performance outside the training interval? Let me know your thoughts on how to improve it.

Thanks for your consideration.
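Context for this (and the similar generalization question below): a standard network trained only on $[0, 2\pi]$ has nothing in its loss that encodes periodicity, so extrapolation failure is expected rather than a bug. A tiny illustration with a polynomial fit as a stand-in for an MLP (a sketch, not from the repository):

```python
import numpy as np

# Generic function approximators fit sin(x) well on the training interval but
# diverge outside it -- nothing in the fit encodes the function's periodicity.
x_train = np.linspace(0, 2 * np.pi, 200)
coeffs = np.polyfit(x_train, np.sin(x_train), deg=9)

x_out = np.linspace(2 * np.pi, 4 * np.pi, 200)
err_in = np.max(np.abs(np.polyval(coeffs, x_train) - np.sin(x_train)))
err_out = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))
print(err_in, err_out)  # small in-sample error, large extrapolation error
```

To extrapolate periodic solutions, one common approach is to build periodicity into the model (e.g. feeding the network $\sin(x)$ and $\cos(x)$ features instead of raw $x$) or to train on collocation points covering the interval of interest.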

How to deal with an integral operator in DeepONet?

A great repository! Thank you very much for open-sourcing it. I also really like your videos about this on YouTube!

I have a question about DeepONet.

$$ \frac{d y}{d x}+y(x)=\int_0^x e^{t-x} y(t) d t $$

I want to use DeepONet to solve this integro-differential equation, but I have no idea how to implement it. The main problem is that I don't want to approximate the integral operator numerically; I hope to handle it with DeepONet directly.

Is it possible to solve it using DeepONet only?

Best Regards!
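One standard workaround (a sketch, not from the repository): because the kernel is exponential, the integral term itself satisfies a simple ODE, so the equation can be reduced to a local one that a vanilla PINN or physics-informed DeepONet can enforce without numerical quadrature. Writing

$$ I(x)=\int_0^x e^{t-x} y(t)\,dt = e^{-x}\int_0^x e^{t} y(t)\,dt, \qquad I'(x) = y(x) - I(x), $$

and differentiating the original equation $y' + y = I$ once gives

$$ y'' + y' = I' = y - I = y - (y' + y) = -y' \qquad\Rightarrow\qquad y'' + 2y' = 0, $$

with the compatibility condition $y'(0) + y(0) = I(0) = 0$ recovered by evaluating the original equation at $x=0$. The residual of this second-order ODE (plus the condition at $x=0$) can then serve as the physics loss in place of the integral form.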

An error occurs in SimpleODE

Thanks for your awesome work, but I found a few minor bugs in SimpleODE.

First, the values x_BC and x_PDE are passed in the wrong order:

def loss(self, x_BC, x_PDE):
    ...
# train neural network
for i in range(steps):
    loss = model.loss(x_PDE, x_BC)  # arguments swapped: should be model.loss(x_BC, x_PDE)

However, f_BC is defined as

def f_BC(x):
  return torch.sin(x)

rather than

def f_BC(x):
  return torch.zeros_like(x)

So the code still produces the correct result even though this error occurs.
The code corrected by me has been uploaded as an attachment (simple_ode.txt), which may explain this bug more clearly.
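A small, hypothetical illustration of how to make this kind of positional swap impossible (the Model class below is only a stand-in for the notebook's class; only the call signature matters):

```python
# Hypothetical stand-in for the notebook's model; only the signature matters.
class Model:
    def loss(self, x_BC, x_PDE):
        return {"BC points": x_BC, "PDE points": x_PDE}

model = Model()
# Keyword arguments bind by name, so the order at the call site cannot swap them.
result = model.loss(x_PDE=[0.5, 1.0], x_BC=[0.0])
print(result["BC points"])  # [0.0]
```

Calling model.loss with keyword arguments in the training loop would have surfaced (or prevented) the swapped-argument bug immediately.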

May I ask how to represent a system of differential equations?

I want to ask whether you think a PINN can solve this problem.

We will solve a simple ODE system:

$$ {\frac{dV}{dt}}=10- {G_{Na}m^3h(V-50)} - {G_{K}n^4(V+77)} - {G_{L}(V+54.387)}$$

$${\frac{dm}{dt}}=\left(\frac{0.1{(V+40)}}{1-e^\frac{-V-40}{10}}\right)(1-m) - \left(4e^{\frac{-V-65}{18}}\right)m $$

$$\frac{dh}{dt}= {\left(0.07e^{\frac{-V-65}{20}}\right)(1-h)} - \left(\frac{1}{1+e^\frac{-V-35}{10}}\right)h$$

$$\frac{dn}{dt}= {\left(\frac{0.01(V+55)}{1-e^\frac{-V-55}{10}}\right)}(1-n) - \left(0.125e^{\frac{-V-65}{80}}\right)n$$

$$\qquad \text{where} \quad t \in [0,7],$$


with the initial conditions
$$V(0) = -65, \quad m(0) = 0.05 , \quad h(0) = 0.6 , \quad n(0) = 0.32 $$

The reference solution is here, where the parameters $G_{Na}, G_{K}, G_{L}$ are conductances whose true values are 120, 36, and 0.3, respectively.

The code below shows how to load the reference data:

data = np.load('g_Na=120_g_K=36_g_L=0.3.npz')
t = data['t']
v = data['v']
m = data['m']
h = data['h']
n = data['n']
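One common way to handle a system like this (a sketch, not from the repository): use a single network with four outputs $(V, m, h, n)$, and let each ODE contribute its own residual to the physics loss. The network size and names below are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Sketch of a multi-output PINN for the Hodgkin-Huxley system above:
# one network maps t to (V, m, h, n); each ODE adds a residual term.
class HHPinn(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 4),  # output columns: V, m, h, n
        )

    def forward(self, t):
        return self.net(t)

def physics_loss(model, t, g_Na=120.0, g_K=36.0, g_L=0.3):
    t = t.clone().requires_grad_(True)
    V, m, h, n = model(t).split(1, dim=1)
    d = lambda y: torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    # Residuals of the four ODEs (zero when the network solves the system).
    rV = d(V) - (10 - g_Na*m**3*h*(V - 50) - g_K*n**4*(V + 77) - g_L*(V + 54.387))
    rm = d(m) - (0.1*(V + 40)/(1 - torch.exp(-(V + 40)/10))*(1 - m)
                 - 4*torch.exp(-(V + 65)/18)*m)
    rh = d(h) - (0.07*torch.exp(-(V + 65)/20)*(1 - h)
                 - h/(1 + torch.exp(-(V + 35)/10)))
    rn = d(n) - (0.01*(V + 55)/(1 - torch.exp(-(V + 55)/10))*(1 - n)
                 - 0.125*torch.exp(-(V + 65)/80)*n)
    return (rV**2 + rm**2 + rh**2 + rn**2).mean()

model = HHPinn()
t = torch.linspace(0, 7, 64).reshape(-1, 1)
loss = physics_loss(model, t)
print(float(loss))
```

For the inverse problem (recovering $G_{Na}, G_{K}, G_{L}$), the same sketch applies with the conductances made trainable (e.g. nn.Parameter) and a data-misfit term added using the t, v, m, h, n arrays from the .npz file.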

ODE has redundant BC

In SimpleODE, two boundary conditions are set for the first-order ODE. Why?
The integration constant $C$ can be obtained from either $y(0)$ or $y(2\pi)$ alone.

Question regarding the generalization of PINN

Hi, thank you so much for your work! I am wondering about the generalization ability of such a PINN. I tried it with a simple sine function: when training and testing within the range $[0, 2\pi]$, both training loss and validation loss are good. However, when I feed the network a new set of x ranging over $[2\pi, 4\pi]$, the prediction looks bad. Is it because the network never sees such numbers? I feel like it is memorizing the distribution it has been trained on rather than generalizing to unseen inputs.
