mlresearchatosram / tmm_fast

tmm_fast is a lightweight package to speed up optical planar multilayer thin-film device computation. Developed by Alexander Luce (@Nerrror) in cooperation with Heribert Wankerl (@HarryTheBird).

License: MIT License

Python 100.00%
optics-simulation

tmm_fast's Issues

Error when dealing with absorbing material

When the substrate is an absorbing material, the calculation is correct. However, when an absorbing material is placed as the upper layer of the photonic structure with a large enough thickness, for instance a structure like W (e.g. 1000 nm)/SiO2/W (substrate), the upper W layer (although this structure is physically meaningless) leads to NaN values in the calculated reflectance.
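
For reference, a minimal sketch of such a structure (the constant indices and thicknesses below are my own illustrative assumptions, not values from the report; shapes follow the [S x L x W] convention used elsewhere in these issues, with the stack duplicated along S to avoid singleton axes):

import numpy as np
import tmm_fast as tmmf

wl = np.linspace(400, 800, 101) * 1e-9           # wavelengths in m
theta = np.deg2rad(np.linspace(0, 45, 5))        # a few incidence angles

# air / W (1000 nm, absorbing) / SiO2 (500 nm) / W substrate
n_W, n_SiO2 = 3.5 + 2.9j, 1.45                   # rough constant indices (assumption)
n = np.array([1.0, n_W, n_SiO2, n_W])
N = np.tile(n[np.newaxis, :, np.newaxis], (2, 1, wl.size))          # [S x L x W]
T = np.tile(np.array([[np.inf, 1000e-9, 500e-9, np.inf]]), (2, 1))  # [S x L]

R = tmmf.coh_tmm('s', N, T, theta, wl)['R']
print(np.isnan(np.asarray(R)).any())             # reportedly True for this stack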

Use `array-api-compat` as a backend

What do you think about the array-api-compat project?

It should not be too difficult to change the backend from PyTorch to array-api-compat (see the sketch after this list), and it would give several advantages:

  • Simplify array type selection/verification logic
  • Add CuPy support for users who want to compute on a GPU but don't need backpropagation or don't want to install PyTorch
  • Make the PyTorch dependency optional
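
For illustration, the dispatch pattern that array-api-compat enables looks roughly like this (a sketch, not existing tmm_fast code):

# The same function works on NumPy, PyTorch or CuPy arrays: the array
# namespace is resolved from the input instead of being hard-coded.
import numpy as np
from array_api_compat import array_namespace

def normalized_intensity(field):
    xp = array_namespace(field)      # numpy / torch / cupy, depending on input
    intensity = xp.abs(field) ** 2
    return intensity / xp.sum(intensity)

print(normalized_intensity(np.array([1.0 + 1.0j, 2.0 + 0.0j])))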

error when running Appendix A.1

When running the Appendix A.1 code

import tmm_fast as tmmf
import numpy as np
L = 12
d = np.random.uniform(20, 150, L)*1e-9 # thicknesses of the layers
d[0] = d[-1] = np.inf # set first and last layer as injection layer
n = np.random.uniform(1.2, 5, L) # random constant refractive index
n[-1] = 1 # outcoupling into air
wl = np.linspace(500, 900, 301)*1e-9
theta = np.deg2rad(np.linspace(0, 90, 301))
result = (tmmf.coh_tmm('s', n, d, theta, wl)['R']+ tmmf.coh_tmm('p', n, d, theta, wl)['R'])/2

I get the error:

Traceback (most recent call last):
  File "d:\project\tmm_fast\appendix_01.py", line 10, in <module>
    result = (tmmf.coh_tmm('s', n, d, theta, wl)['R']+ tmmf.coh_tmm('p', n, d, theta, wl)['R'])/2
  File "d:\project\tmm_fast\tmm_fast\vectorized_tmm_dispersive_multistack.py", line 125, in coh_vec_tmm_disp_mstack
    check_inputs(N, T, lambda_vacuum, Theta)
  File "d:\project\tmm_fast\tmm_fast\vectorized_tmm_dispersive_multistack.py", line 498, in check_inputs
    assert N.ndim == 3, 'N is not of shape [S x L x W] (3d), as it is of dimension ' + str(N.ndim)
AssertionError: N is not of shape [S x L x W] (3d), as it is of dimension 2

tmm-fast version 0.2.1
OS: Windows 11
torch 2.4.0
numpy 2.0.1
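
A possible workaround (an untested sketch on my part): the assertion asks for N of shape [S x L x W] and T of shape [S x L], so the appendix's 1D inputs can be broadcast explicitly. Since singleton axes may be squeezed away internally (see the squeezing issue further below), the stack axis is duplicated here as a crude precaution:

# Untested sketch: broadcast the 1D appendix inputs to [S x L x W] / [S x L].
N = np.tile(n[np.newaxis, :, np.newaxis], (2, 1, wl.size))
T = np.tile(d[np.newaxis, :], (2, 1))
result = (tmmf.coh_tmm('s', N, T, theta, wl)['R']
          + tmmf.coh_tmm('p', N, T, theta, wl)['R']) / 2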

Move from gym to gymnasium

Since the gym package has been renamed to gymnasium, the dependencies have to be updated as well.
I have prepared a pull request for it.

It works only with GPU

Hi, if I try to run the demo script example_tmm.py in a CPU-only environment, the following error appears:
AssertionError: It's not clear which beam is incoming vs outgoing. Weird index maybe?

Is there any suggested fix or workaround? Thanks!

Provide tmm_fast via conda-forge

Dear devs,
thanks for providing this extremely useful module and the accompanying tutorial in J. Opt. Soc. Am. A.
I used an earlier version for a recent publication and made sure to cite your work.

However, with Anaconda/conda being popular in the scientific/R&D community, I have a general request: would it be possible to provide tmm_fast via conda-forge for the current and future releases via @MLResearchAtOSRAM?

As I'm only a conda user, I cannot estimate the workload of this request (see conda docs here). Still, it would be nice to use the full capabilities of conda with your package (environments, dependencies, etc.), as I found it to be much more resilient than pip-based tooling such as venv.

Thanks for considering my request!

Best,
superluminescent (Lukas)

error in incoherent example code

When running example_inc_tmm.py, I get an error AssertionError: It's not clear which beam is incoming vs outgoing. Weird index maybe? related to the forward angle check.

The other example file (example_tmm) runs without any errors.

Full trace below:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
File \\example_inc_tmm.py:46
     43 T[:, -1] = np.inf
     45 T = torch.from_numpy(T)
---> 46 O_fast = inc_tmm_fast(pol, M, T, mask, theta, wl, device='cpu')
     48 fig, ax = plt.subplots(2,1)
     49 cbar = ax[0].imshow(O_fast['R'][0].numpy(), aspect='auto')

File \\tmm_fast\vectorized_incoherent_tmm.py:152, in inc_vec_tmm_disp_lstack(pol, N, D, mask, theta, lambda_vacuum, device, timer)
    150 d = D[:, m_]
    151 d[:, 0] = d[:, -1] = np.inf
--> 152 forward = coh_tmm(
    153     pol, N_, d, snell_theta[0, :, m_[0], 0], lambda_vacuum, device
    154 )
    155 # the substack must be evaluated in both directions since we can have an incoming wave from the output side
    156 # (a reflection from an incoherent layer) and Reflectivit/Transmissivity can be different depending on the direction
    157 backward = coh_tmm(
    158     pol,
    159     N_.flip([1]),
   (...)
    163     device,
    164 )

File \\tmm_fast\vectorized_tmm_dispersive_multistack.py:134, in coh_vec_tmm_disp_mstack(pol, N, T, Theta, lambda_vacuum, device, timer)
    130    N = torch.tile(N, (num_wavelengths, 1)).T
    132 # SnellThetas is a tensor, for each stack and layer, the angle that the light travels
    133 # through the layer. Computed with Snell's law. Note that the "angles" may be complex!
--> 134 SnellThetas = SnellLaw_vectorized(N, Theta)
    137 theta = 2 * np.pi * torch.einsum('skij,sij->skij', torch.cos(SnellThetas), N)  # [theta,d, lambda]
    138 kz_list = torch.einsum('sijk,k->skij', theta, 1 / lambda_vacuum)  # [lambda, theta, d]

File \\tmm_fast\vectorized_tmm_dispersive_multistack.py:247, in SnellLaw_vectorized(n, th)
    238 # The first and last entry need to be the forward angle (the intermediate
    239 # layers don't matter, see https://arxiv.org/abs/1603.02720 Section 5)
    241 angles[:, :, 0] = torch.where(
    242     is_not_forward_angle(n[:, 0], angles[:, :, 0]).bool(),
    243     pi - angles[:, :, 0],
    244     angles[:, :, 0],
    245 )
    246 angles[:, :, -1] = torch.where(
--> 247     is_not_forward_angle(n[:, -1], angles[:, :, -1]).bool(),
    248     pi - angles[:, :, -1],
    249     angles[:, :, -1],
    250 )
    252 return angles

File \\tmm_fast\vectorized_tmm_dispersive_multistack.py:297, in is_not_forward_angle(n, theta)
    294 assert (ncostheta.real > -100 * EPSILON)[answer].all(), error_string
    295 assert ((n * torch.cos(torch.conj(theta))).real > -100 * EPSILON)[answer].all(), error_string
--> 297 assert (ncostheta.imag < 100 * EPSILON)[~answer].all(), error_string
    298 assert (ncostheta.real < 100 * EPSILON)[~answer].all(), error_string
    299 assert ((n * torch.cos(torch.conj(theta))).real < 100 * EPSILON)[~answer].all(), error_string

AssertionError: It's not clear which beam is incoming vs outgoing. Weird index maybe?
n: tensor([[2.2000+0.0500j, 2.2000+0.0500j, 2.2000+0.0500j,  ...,
         2.2000+0.0500j, 2.2000+0.0500j, 2.2000+0.0500j],
        [2.2000+0.0500j, 2.2000+0.0500j, 2.2000+0.0500j,  ...,
         2.2000+0.0500j, 2.2000+0.0500j, 2.2000+0.0500j]],
       dtype=torch.complex128)   angle: tensor([[[0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j,  ...,
          0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j],
         [0.0160-0.0004j, 0.0160-0.0004j, 0.0160-0.0004j,  ...,
          0.0160-0.0004j, 0.0160-0.0004j, 0.0160-0.0004j],
         [0.0321-0.0007j, 0.0321-0.0007j, 0.0321-0.0007j,  ...,
          0.0321-0.0007j, 0.0321-0.0007j, 0.0321-0.0007j],
         ...,
         [0.4696-0.0115j, 0.4696-0.0115j, 0.4696-0.0115j,  ...,
          0.4696-0.0115j, 0.4696-0.0115j, 0.4696-0.0115j],
         [0.4709-0.0116j, 0.4709-0.0116j, 0.4709-0.0116j,  ...,
          0.4709-0.0116j, 0.4709-0.0116j, 0.4709-0.0116j],
         [0.4715-0.0116j, 0.4715-0.0116j, 0.4715-0.0116j,  ...,
          0.4715-0.0116j, 0.4715-0.0116j, 0.4715-0.0116j]],

        [[0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j,  ...,
          0.0000+0.0000j, 0.0000+0.0000j, 0.0000+0.0000j],
         [0.0160-0.0004j, 0.0160-0.0004j, 0.0160-0.0004j,  ...,
          0.0160-0.0004j, 0.0160-0.0004j, 0.0160-0.0004j],
         [0.0321-0.0007j, 0.0321-0.0007j, 0.0321-0.0007j,  ...,
          0.0321-0.0007j, 0.0321-0.0007j, 0.0321-0.0007j],
         ...,
         [0.4696-0.0115j, 0.4696-0.0115j, 0.4696-0.0115j,  ...,
          0.4696-0.0115j, 0.4696-0.0115j, 0.4696-0.0115j],
         [0.4709-0.0116j, 0.4709-0.0116j, 0.4709-0.0116j,  ...,
          0.4709-0.0116j, 0.4709-0.0116j, 0.4709-0.0116j],
         [0.4715-0.0116j, 0.4715-0.0116j, 0.4715-0.0116j,  ...,
          0.4715-0.0116j, 0.4715-0.0116j, 0.4715-0.0116j]]],
       dtype=torch.complex128)

Implementation of an equivalent to tmm.tmm_core.inc_tmm()

Hi again,
I was wondering if there are any plans to implement a vectorized version of the "incoherent, or partly-incoherent-partly-coherent, transfer matrix method" as implemented in the original tmm package by S. Byrnes?
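
For reference, this is the functionality in question, sketched with the original tmm package (the indices, thicknesses, and coherency flags below are illustrative):

import numpy as np
import tmm

# 'c' marks coherent layers, 'i' incoherent ones; the semi-infinite ambient
# layers must be incoherent in the original implementation.
n_list = [1.0, 2.2 + 0.05j, 3.5, 1.0]
d_list = [np.inf, 100, 20000, np.inf]   # nm; the thick slab is treated incoherently
c_list = ['i', 'c', 'i', 'i']
res = tmm.inc_tmm('s', n_list, d_list, c_list, 0, 633)
print(res['R'], res['T'])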

For one of my projects, I reproduced the original functionality for a particular case (fitting to FTIR data of a DBR on a thick bulk slab, with vacuum as a sub- and superstrate).

I imagine that the partly-coherent method might benefit people in similar use cases, both in measurement and simulation.

My crude implementation already resulted in a considerable speed improvement compared to the original tmm code. If you're interested, I could take a deeper dive to generalize and optimize the code further for a possible contribution to tmm_fast.

I'm open to discussing this further should you be interested.

Best, superluminescent

Simplify dependencies

Since tmm-fast is published on PyPI, I think it is better to keep the dependencies as minimal as possible.

For instance, it would be better to remove dask from the dependencies, as it is not used directly anywhere in the project.

The main functionality in vectorized_tmm_dispersive_multistack.py and tmm_fast_torch.py does not require gymnasium, matplotlib, or seaborn, so these could be declared optional.

It might even be better to split the TMM and AI functionality into two separate projects...
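
As an illustration of the optional-dependency idea, a setup sketch (assuming setuptools; the extra names are my own invention):

# setup.py sketch: keep the TMM core lean and move the AI/plotting stack
# into optional extras, installable e.g. via `pip install tmm-fast[plot]`.
from setuptools import setup

setup(
    name="tmm-fast",
    install_requires=["numpy", "torch"],      # core TMM functionality
    extras_require={
        "rl": ["gymnasium"],                  # reinforcement-learning tooling
        "plot": ["matplotlib", "seaborn"],    # example/plotting scripts
    },
)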

input datatype for inc_vec_tmm_disp_lstack

inc_vec_tmm_disp_lstack only accepts torch tensors as input. NumPy input results in the error message below:

AttributeError: 'numpy.ndarray' object has no attribute 'requires_grad'
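
A simple workaround (sketch) until NumPy inputs are converted internally:

import numpy as np
import torch

N_np = np.ones((1, 4, 3), dtype=complex)   # hypothetical NumPy inputs
D_np = np.ones((1, 4)) * 100e-9

N = torch.from_numpy(N_np)   # torch tensors carry the .requires_grad
D = torch.from_numpy(D_np)   # attribute that the function checks for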

BUG: Unexpected squeezing of Theta and lambda_vacuum inputs in coh_vec_tmm_disp_mstack

I have tried to compare the output of the coh_vec_tmm_disp_mstack function with the original tmm package.
For this I have reduced the Theta and lambda_vacuum vectors to a single element as follows:

import numpy as np
from tmm_fast.vectorized_tmm_dispersive_multistack import coh_vec_tmm_disp_mstack as tmm

# wl = np.asarray([400., 500.,]) * 1e-9 # This works well
wl = np.asarray([400.,]) * 1e-9 # AssertionError: N and T are not of same shape, as they are of dimensions 3 and 2
# theta = np.asarray([0., 45.,]) # This also works well
theta = np.asarray([0.,]) # IndexError: tuple index out of range
mode = 'T'
num_layers = 4
num_stacks = 128

refractive_index = np.ones([num_stacks, num_layers, wl.shape[0]])
thickness = np.ones([num_stacks, num_layers]) * 100

tmm(
    pol="s",
    N=refractive_index,
    T=thickness,
    Theta=theta,
    lambda_vacuum=wl,
)

However, I faced the following error in the case of a single wavelength:

AssertionError: N and T are not of same shape, as they are of dimensions 3 and 2

And in the case of a single angle:

IndexError: tuple index out of range

I think that both of these problems could be fixed by removing the unnecessary squeeze() from the converter function in the vectorized_tmm_dispersive_multistack module:

def converter(data, device):
    if type(data) is not torch.Tensor:
        if type(data) is np.ndarray:
            data = torch.from_numpy(data.copy())
        else:
            raise ValueError('At least one of the inputs (i.e. N, Theta, ...) is not of type numpy.array or torch.Tensor!')
    data = data.type(torch.cfloat).to(device)
    return data.squeeze()
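
For comparison, the suggested fix would look roughly like this (a sketch of the issue's proposal, not merged code):

import numpy as np
import torch

def converter(data, device):
    if type(data) is not torch.Tensor:
        if type(data) is np.ndarray:
            data = torch.from_numpy(data.copy())
        else:
            raise ValueError('At least one of the inputs (i.e. N, Theta, ...) is not of type numpy.array or torch.Tensor!')
    # no final .squeeze(): singleton wavelength/angle axes survive, so
    # single-wavelength and single-angle inputs keep their dimensions
    return data.type(torch.cfloat).to(device)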

Calculation result different from tmm package

Hi,

I would like to use tmm_fast for reflection calculation and optimisation, especially in a context like example 6 in the tmm examples.

I took the example from the original tmm package and tried to reimplement the same calculation in tmm_fast. The outputs do not match.

import numpy as np
import matplotlib.pyplot as plt
import torch
## tmm

import tmm as tmm
"""
An example reflection plot with a surface plasmon resonance (SPR) dip.
Compare with http://doi.org/10.2320/matertrans.M2010003 ("Spectral and
Angular Responses of Surface Plasmon Resonance Based on the Kretschmann
Prism Configuration") Fig 6a
"""
# list of layer thicknesses in nm
d_list = [np.inf, 5, 30, np.inf]
# list of refractive indices
n_list = [1.517, 3.719+4.362j, 0.130+3.162j, 1]
# wavelength in nm
lam_vac = 633
# list of angles to plot
theta_list = np.linspace(30*np.pi/180, 60*np.pi/180, num=300)
# initialize lists of y-values to plot
Rp = []
for theta in theta_list:
    Rp.append(tmm.coh_tmm('p', n_list, d_list, theta, lam_vac)['R'])
    
## tmm_fast
from tmm_fast import coh_tmm

pol="p"
n_list= torch.tensor(n_list, dtype=torch.complex64)
d_list= np.array(d_list)

N = torch.zeros(1,4,1, dtype=torch.complex128)
N[0,:,0]= n_list

T=torch.Tensor(1,4)
T[0,:] = torch.Tensor(d_list)

theta = np.linspace(30*np.pi/180, 60*np.pi/180, num=300)
lambda_vacuum = torch.Tensor([633])
theta_incidence = torch.Tensor(theta)

tm = coh_tmm(pol,N,T,theta_incidence,lambda_vacuum)    

fig,ax = plt.subplots(1,2,figsize=(8,4))
ax[0].plot(theta_list/np.pi*180, Rp, 'blue')
ax[1].plot(theta_list/np.pi*180, np.array(tm['R']).reshape(-1,1), 'blue')
for a in ax:
    a.set_xlabel('theta (degree)')
    a.set_ylabel('Fraction reflected')
    a.set_xlim(30, 60)
    a.set_ylim(0, 1)
plt.suptitle('Reflection of p-polarized light with Surface Plasmon Resonance\n'
          'Compare with http://doi.org/10.2320/matertrans.M2010003 Fig 6a')
plt.show()

This produces the attached output (left: original tmm; right: tmm_fast; the original tmm output is the expected outcome).
[image: tmm_fast_coh_vec_tmm_disp_mstack_result]

Am I using the package incorrectly, or is there some other issue? tmm_fast is version 0.2.1, tmm is version 0.1.8.
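
A sketch for narrowing the mismatch down: rerun both packages on identical inputs (same shapes as in my code above) and print a few values side by side; if the usage is correct, the two rows should agree.

import numpy as np
import torch
import tmm
from tmm_fast import coh_tmm

n_list = [1.517, 3.719 + 4.362j, 0.130 + 3.162j, 1]
d_list = [np.inf, 5, 30, np.inf]
theta = np.linspace(30, 60, 7) * np.pi / 180

R_ref = np.array([tmm.coh_tmm('p', n_list, d_list, th, 633)['R'] for th in theta])

N = torch.tensor(n_list, dtype=torch.complex128).reshape(1, 4, 1)
T = torch.tensor(np.array(d_list)).reshape(1, 4)
R_fast = coh_tmm('p', N, T, torch.from_numpy(theta), torch.tensor([633.0]))['R']

print(np.round(R_ref, 4))
print(np.round(np.asarray(R_fast).reshape(-1), 4))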
